Actors (the team): two developers and one admin.
This article focuses on using technologies such as Ansible, Docker Swarm, Jenkins and Portainer to implement a CI/CD pipeline that can be managed through a convenient web interface.

Introduction
What does a developer usually want? To create, without thinking about money, and to see the results of that creativity as quickly as possible.
On the other hand, there is the business, which wants money, and more of it, and therefore constantly thinks about shortening the time it takes to bring a product to market. In other words, the business dreams of getting an MVP (Minimum Viable Product) faster, both for new products and for updates to existing ones.
And what does the admin want? The admin is a simple person: he wants the service not to go down, not to get in the way of his Quake and Tanks, and for developers and the business to bother him as rarely as possible.
Since, as life shows, fulfilling the admin's wishes means the dreams of the other actors have to be realized by his efforts, the IT crowd has put a lot of work into this. The desired result has often been achieved by adhering to the DevOps methodology and implementing the principles of CI/CD (Continuous Integration and Delivery).
That is what happened in one small new project in the Ural IT Directorate, where, in a very short time, we managed to implement a full pipeline from a developer publishing source-code changes to the version control system all the way to the automatic launch of the new version of the application in a test environment.
Part 0. Task description
System architecture
After some discussion, the team chose the following two-tier architecture:
- a Java backend built on the Spring Boot framework, communicating with various databases and other corporate systems (because it is easy, fast and clear how to write it);
- a frontend on NodeJS (with ReactJS for the in-browser interface), because it is very fast.
In front of these components another NGINX server was added, acting as a frontend for the NodeJS application. Its role is to distribute requests between the application itself and the other infrastructure components of the system, which are discussed below.
What did the team want?
As soon as the new project got the green light, the first technical task appeared, namely preparing the "hardware" for its launch. Since it was obvious to everyone involved that without efficient rollouts of new versions to the servers the development of the project would be very painful, it was decided right away to go for a full CI/CD setup, i.e. the team wanted to end up with the following pipeline:
- the developer publishes changes (a commit) to the version control system (git);
- the git server performs the minimum required checks on the commit: the presence of mandatory attributes (for example, a correctly formatted commit message), compliance with the Bank's style guide and other bureaucracy;
- the git server, via the webhooks mechanism, notifies the Jenkins continuous integration server;
- Jenkins checks out the current version of the sources from git and executes the CI/CD pipeline:
- compiling the sources and running the initial tests;
- building a new version of the Docker image (deploying anything straight onto bare metal or a virtual machine in 2018 would be indecent; people would not understand);
- publishing the image to Artifactory (a system for storing and managing binary artifacts; highly recommended!);
- restarting the new version of the application (or the whole application "stack") on the server, with a rollback to the previous version if the update turns out to be less than successful.
Scope
People familiar with the subject have probably already asked: "Why use crutches of some kind instead of production-ready solutions à la Kubernetes or Mesos/Marathon?". The question is entirely reasonable, so let us say right away that the described solution was chosen for a number of reasons, including:
- it is simpler (much, much simpler);
- it was easier for the whole team to understand and easier for the admin to deploy.
However, we do not forget that the solution we chose belongs to a rich family of crutches, and we hope to move to a more standard OpenShift + Bamboo stack in the near future.
In addition, this article applies only to web applications and describes the ideal case of a stateless architecture: the data presumably exists somewhere, but it lives far away, and we do not think about storing it.
Part 1. Installation and basic software setup on the host system
To maximize automation and make the whole chain highly reproducible, it was decided to configure the host system (a virtual machine on VMware, qemu/KVM, a cloud, or anything else) with the Ansible configuration management system.
It should be added that, besides easy repeatability and reproducibility, such systems (apart from Ansible there are also Puppet and Chef) have a huge advantage over assorted shell or python scripts: idempotency, i.e. the property that re-running them does not change the final state of the system.
This advantage comes from the fact that configuration management systems describe not the process of reaching the desired state, but the desired state itself, in a declarative form.
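As a small illustration of what "declarative" means in practice (the deploy user here is made up for the example, while the /APP directory is the one used later in this project), the following tasks state what must be true, and running them twice leaves the system exactly as it was after the first run:
# A minimal sketch: desired state, not steps
- hosts: all
  tasks:
    - name: Ensure the application directory exists
      file:
        path: /APP
        state: directory
        mode: '0755'
    - name: Ensure the deploy user exists
      user:
        name: deploy
        state: present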
1.1 ssh HostKeyChecking
By default, Ansible takes security seriously and checks the ssh fingerprint of the remote host being configured. Since password-based authorization does not work in this mode, during the initial setup of the server you must either disable HostKeyChecking or add the host's fingerprint to the local cache in advance. The former can be done in two ways:
Either by defining a special environment variable:
$ export ANSIBLE_HOST_KEY_CHECKING=False
or by adding the host_key_checking parameter to the local configuration file ansible.cfg:
[defaults]
host_key_checking = False
With the first method the check is disabled only for as long as the environment variable exists; with the second it is disabled permanently for every run that picks up this ansible.cfg.
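The alternative route, pre-adding the fingerprint to the local cache, is not shown in the article; one possible sketch of it uses Ansible's own known_hosts module together with ssh-keyscan (the host name is the illustrative one from the inventory below):
# A hedged sketch: cache the remote host's key instead of disabling the check
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Add the target host's ssh key to ~/.ssh/known_hosts
      known_hosts:
        name: some-cool-vm-host
        key: "{{ lookup('pipe', 'ssh-keyscan -t ed25519 some-cool-vm-host') }}"
        state: present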
1.2 Inventory
Inventory is the Ansible entity that describes the hosts, and groups of hosts, whose configuration is to be managed.
Inventory can be written in ini or yaml format; the latter was chosen for this project.
An example of the hosts.yml file:
all:
  hosts:
    # The host as Ansible will see it
    some-cool-vm-host:
  vars:
    # Connection credentials
    ansible_user: 'root'
    # Yes, a plain-text password :-(
    ansible_password: '12345678'
    # The corporate CA certificate
    corp_ca_crt: "-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----"
For those encountering the yaml format for the first time, note that all indentation in it must be done with spaces (tabs are not allowed).
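No groups were needed in this single-host project, but since Inventory is also where groups of hosts are described, a hypothetical multi-host extension of the same file (the extra host names below are made up) could look like this:
# A hypothetical inventory with groups
all:
  children:
    ci_servers:
      hosts:
        some-cool-vm-host:
    app_servers:
      hosts:
        app-vm-01:
        app-vm-02:
  vars:
    ansible_user: 'root'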
1.3 Playbook
A playbook is another Ansible entity; it directly and declaratively describes the desired end state of the hosts and groups from the Inventory. Like almost everything in Ansible, a playbook is described in one or more yaml files.
To execute a playbook file, run a command like this:
ansible-playbook -i ./hosts.yml tasks.yml
This playbook described the complete configuration of the base system, from creating the necessary users to installing Docker. Its beginning looks like this:
- hosts: all
  tasks:
    # Remove the default zypper repositories
    - name: Remove default repositories
      shell: rm -rf /etc/zypp/repos.d/*
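The Docker-related tasks of the playbook are not reproduced in the article; under the assumption of the same zypper-based (SUSE) host that the repository cleanup above suggests, they might be sketched roughly like this (package and service names are the usual SUSE ones, not taken from the original playbook):
# A hypothetical sketch of the Docker part of such a playbook
- hosts: all
  tasks:
    - name: Install Docker
      package:
        name: docker
        state: present
    - name: Start Docker and enable it on boot
      service:
        name: docker
        state: started
        enabled: true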
Part 2. Project services
2.1 CI Server and Process
The well-known Jenkins CI server is responsible for the "continuous integration and deployment" process in the project.
The Ansible playbook above is written so that, by the end of its run, the server is already running a freshly installed Jenkins (in a Docker container), and its temporary password is stored on the LOCAL machine in the initialJenkinsAdminPassword.txt file.
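The Jenkins-related part of the playbook is not shown; a hedged approximation of it with the docker_container and fetch modules (the container name matches the compose file below, everything else is an assumption) might look like this:
# A hypothetical approximation of the Jenkins-related playbook tasks
- hosts: all
  tasks:
    - name: Run Jenkins in a Docker container
      docker_container:
        name: ci-jenkins
        image: jenkins/jenkins:lts
        restart_policy: always
        volumes:
          - /APP/jenkins/master:/var/jenkins_home
    - name: Copy the temporary admin password to the local machine
      fetch:
        src: /APP/jenkins/master/secrets/initialAdminPassword
        dest: ./initialJenkinsAdminPassword.txt
        flat: true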
Since the whole team wanted to get as close as possible to the ideal of Infrastructure as Code (IaC), the project's jobs were implemented as declarative and scripted Jenkins pipelines, in which the jobs are described in the Groovy scripting language and their code is stored next to the project's source code in the version control system (git).
An example pipeline for building the backend (Spring Boot) part of the application is shown below:
pipeline {
    agent {
        // The whole pipeline runs inside a Docker container
        // with the JDK image:
        docker { image 'java:8-jdk' }
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'chmod +x ./gradlew'
                sh './gradlew build -x test'
            }
        }
        stage('Test') {
            steps {
                script {
                    sh './gradlew test'
                }
            }
        }
    }
}

// Building and publishing the Docker image to Artifactory:
node {
    stage('Build and publish the image') {
        docker.withRegistry("https://repo.artifactory.bank", "LoginToArtifactory") {
            def dkrImg = docker.build("repo.artifactory.bank/dev-backend:${env.BUILD_ID}")
            dkrImg.push()
            dkrImg.push('latest')
        }
    }
    stage('Update the service from Artifactory') {
        docker.withRegistry("https://repo.artifactory.bank", "LoginToArtifactory") {
            sh "docker service update --image repo.artifactory.bank/dev-backend:${env.BUILD_ID} SMB_dev-backend"
        }
    }
}
Separately, I would like to note that when the images are built, each version gets its own tag, which greatly simplifies the process of automatically restarting the application.
2.2 Portainer
To make it easier for all team members to interact with Docker, the project uses a simple web interface for it, Portainer. This application, like Docker itself, is written in Go, and it therefore combines high performance with extremely easy deployment.
For example, in the simplest case, the following command launches Portainer on port 9000 of the host system:
docker run -d \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer
However, in this project it was decided to launch it through the single-host "orchestration" tool Docker Compose instead.
2.3 Docker containers and services
All the necessary applications and services in this project are launched through a simple docker-compose.yml file.
The basic set of “infrastructure” services is launched using the following description:
version: '3.4'

services:
  # The NGINX frontend
  nginx:
    image: "nginx:1"
    container_name: fe-nginx
    restart: always
    volumes:
      - /APP/configs/nginx.conf:/etc/nginx/nginx.conf
      - /APP/logs/nginx:/var/log/nginx
      - /usr/share/zoneinfo/Europe/Moscow:/etc/localtime:ro
    networks:
      - int
    ports:
      - "80:80/tcp"
      - "8080:80/tcp"

  # Jenkins CI - the server driving the CI/CD process
  ci:
    image: "jenkins/jenkins:lts"
    container_name: ci-jenkins
    restart: always
    volumes:
      - /usr/share/zoneinfo/Europe/Moscow:/etc/localtime:ro
      - /APP/jenkins/master:/var/jenkins_home
    environment:
      JENKINS_OPTS: '--prefix=/jenkinsci'
      JAVA_OPTS: '-Xmx512m'
    networks:
      int:
        aliases:
          - srv-ci

  # Portainer - the web interface to Docker
  portainer:
    image: "portainer/portainer:latest"
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /APP/portainer_data
        target: /data
    networks:
      int:
        aliases:
          - srv-portainer
    command: -H unix:///var/run/docker.sock

networks:
  int:
    external: true
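Bringing this file up is a plain docker-compose up -d. If one wanted to drive it from the same Ansible playbook (the article does not say whether that was done), the docker_compose module would be one hedged way to do it, assuming the file lives in /APP:
# A hypothetical task: start the infrastructure stack described in /APP/docker-compose.yml
- hosts: all
  tasks:
    - name: Start the infrastructure services with docker-compose
      docker_compose:
        project_src: /APP
        state: present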
2.4 Docker Swarm cluster without a cluster
As you can see in the docker-compose.yml file above, it contains no mention of the backend and frontend parts of the application, and it references an "external" (external: true) network called int. External here means any resource (a network, a volume, or any other existing entity) that is not declared in the same file.
The point is that the project needed the ability to restart "services" whenever the image version in the Artifactory Docker repository is updated, and exactly this functionality comes out of the box with Docker Swarm services (the multi-master container orchestration system built into Docker). It works by changing the desired image of a running service: if a newer version of the image exists in the repository, the restart happens automatically; if the version has not changed, the service's container simply keeps running.
As for the network: since the application is launched as Docker Swarm services (the yaml description is given below), we needed to preserve network connectivity between its components and the NGINX server mentioned above. This was achieved by creating a cluster overlay network on the server, joined both by the basic services described above and by the application components themselves:
docker network create -d overlay --subnet 10.1.2.254/24 --attachable int
(The --attachable flag is required: without it, the basic, non-Swarm services cannot join the cluster network.)
The description of the application's two services:
version: '3.2'

services:
  pre-live-backend:
    image: repo.artifactory.bank/dev-backend:latest
    deploy:
      mode: replicated
      replicas: 1
    networks:
      - int

  pre-live-front:
    image: repo.artifactory.bank/dev-front:latest
    deploy:
      mode: replicated
      replicas: 1
    networks:
      - int

networks:
  int:
    external: true
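The pipeline requirement of "rolling back to the previous version if the update is not the most successful" is not visible in this file. A hedged sketch of how it could be expressed (compose file format 3.7, with a hypothetical Spring Boot healthcheck endpoint) is shown below:
# A hypothetical refinement of one service: let Swarm roll an update back
# automatically if the new container does not become healthy
version: '3.7'
services:
  pre-live-backend:
    image: repo.artifactory.bank/dev-backend:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        order: start-first
        failure_action: rollback
    networks:
      - int
networks:
  int:
    external: true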
Conclusion
As noted at the beginning, at the start of the project the team wanted to get all the benefits of the DevOps approach, in particular to organize continuous delivery of code from the git repository to the "combat" server in the form of an application running on it. At the same time, at this stage we did not want to abandon our accumulated practices entirely and rebuild our lives around the world of large orchestrators. The described system architecture, thought out and implemented in less than two weeks (in parallel with the other projects the team members were working on), ultimately allowed us to achieve what we wanted. We believe this material will be interesting and useful to other teams putting DevOps approaches into practice.