
How we implemented continuous delivery of updates to the customer platform

We at True Engineering have set up a process of continuous delivery of updates to the customer’s server and want to share this experience.

To begin with, we developed an online system for the customer and deployed it in our own Kubernetes cluster. This highly loaded solution has now moved to the customer's platform, for which we set up a fully automatic Continuous Deployment process. As a result, we accelerated time-to-market: the delivery of changes to the production environment.

In this article, we will cover all the steps of the Continuous Deployment (CD) process, that is, the delivery of updates to the customer platform:

  1. how the process starts,
  2. synchronization with the customer's Git repository,
  3. building the backend and frontend,
  4. automatic deployment of the application to a test environment,
  5. automatic deployment to production.

Along the way, we will share the details of the configuration.



1. Starting CD


Continuous Deployment begins with the developer pushing changes to the release branch of our Git repository.

Our application is based on a microservice architecture, and all of its components are stored in a single repository. Because of this, all microservices are built and deployed together, even if only one of them has changed.

We organized work through a single repository for several reasons.


2. Synchronizing source code with the customer's Git repository


The changes we make are automatically synchronized with the customer's Git repository. On the customer's side, an application build is configured to run whenever the branch is updated, along with deployment to production. Both processes run in their environment, from their Git repository.

We cannot work with the customer's repository directly, because we need our own development and testing environments. For this we use our own Git repository, which is synchronized with theirs. As soon as a developer pushes changes to the appropriate branch of our repository, GitLab immediately forwards them to the customer.
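The article does not show the synchronization configuration itself, so here is a minimal sketch of how such a one-way push can be expressed as a GitLab CI job (the job name and the CUSTOMER_REPO_URL / CUSTOMER_TOKEN variables are our assumptions; GitLab's built-in push mirroring is an alternative that needs no job at all):

    # .gitlab-ci.yml -- hypothetical sync job; the customer remote and token
    # are CI/CD variables we assume are configured in the project settings.
    sync-to-customer:
      stage: sync
      only:
        - release
      script:
        # Push the updated branch to the customer's repository.
        - git remote add customer "https://oauth2:${CUSTOMER_TOKEN}@${CUSTOMER_REPO_URL}" || true
        - git push customer "HEAD:refs/heads/${CI_COMMIT_REF_NAME}"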



After that, a build needs to be made. It consists of several stages: building the backend and the frontend, testing, and delivery to production.

3. Building the backend and frontend


Building the backend and the frontend are two parallel tasks performed by GitLab Runner. The build configuration lives in the same repository.

GitLab's documentation includes a tutorial on writing the YAML script for builds.

GitLab Runner takes the code from the required repository, builds the Java application with the build command, and sends it to the Docker registry. Here we build the backend and the frontend, producing Docker images that we push to the registry on the customer's side. To manage Docker images, we use a Gradle plugin.
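As a rough sketch, the two parallel jobs can be declared in .gitlab-ci.yml like this (the job names, the Gradle tasks, and the dockerPush task of the assumed Gradle Docker plugin are illustrative, not the actual configuration):

    # .gitlab-ci.yml -- hypothetical build stage with two parallel jobs.
    stages:
      - build

    build-backend:
      stage: build
      script:
        # Build the Java backend and push its Docker image via the Gradle plugin.
        - ./gradlew :backend:build :backend:dockerPush

    build-frontend:
      stage: build
      script:
        # Build and push the frontend image in parallel with the backend job.
        - ./gradlew :frontend:build :frontend:dockerPush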

We synchronize the versions of our images with the version of the release that will be published to Docker. For everything to work smoothly, we made a few settings:

1. Containers are not rebuilt between the test and production environments. We parameterized everything so that the same container can run, without rebuilding, with all its settings, environment variables, and services, both in the test environment and in production.

2. To update the application via Helm, you must specify its version. Building the backend, building the frontend, and updating the application are three different tasks, so it is important to use the same application version everywhere. For this we use data from the Git history, since the K8S cluster configuration and the applications live in the same Git repository.

We get the application version from the output of the command:

    git describe --tags --abbrev=7
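One way to make all three tasks agree on the version (a sketch under our assumptions; the APP_VERSION variable name is hypothetical) is to compute it once per job and pass it to both the build and the Helm update:

    # Hypothetical fragment: derive a single version for build and deploy jobs.
    before_script:
      - export APP_VERSION=$(git describe --tags --abbrev=7)

    # ...then, inside a build job:
    #   ./gradlew build -PimageTag="${APP_VERSION}"
    # ...and inside the deploy job:
    #   helm upgrade my-release ./chart --set app.version="${APP_VERSION}"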

4. Automatically Deploy All Changes in a Test Environment (UAT)


The next step in the build script is to automatically update the K8S cluster, provided that the entire application has been built and all artifacts have been published to the Docker Registry. After that, the test environment update is launched.

The cluster update is launched using Helm upgrade. If something does not go according to plan, Helm automatically rolls back all of its changes; its work does not need to be supervised.

We deliver the K8S cluster configuration together with the build. Therefore, the next step is to update it: configMaps, deployments, services, secrets, and any other K8S configuration that we changed.
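A sketch of what this update step can look like (the release and chart names are our assumptions, and the exact rollback behavior depends on the Helm version and flags; --atomic rolls the release back automatically if the upgrade fails):

    # Hypothetical deploy step: upgrade the release and its K8S configuration.
    helm upgrade my-release ./chart \
      --install \
      --atomic \
      --set global.env=uat \
      --set app.version="${APP_VERSION}"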

After that, Helm starts a RollOut update of the application itself in the test environment, before the application is deployed to production. This is done so that users can manually check the business features we shipped to the test environment.

5. Automatically Deploy All Changes to Prod


To deploy the update to the production environment, all that remains is to press one button in GitLab, and the containers are immediately delivered to the production environment.
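In GitLab CI, such a one-button step is typically a manual job (a sketch; the job, environment, and release names are our assumptions):

    # Hypothetical manual deploy job: runs only when triggered from the GitLab UI.
    deploy-prod:
      stage: deploy
      when: manual
      environment:
        name: production
      script:
        - helm upgrade my-release ./chart --install --set global.env=prod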

The same application can work in different environments, test and production, without being rebuilt. We use the same artifacts, without changing anything in the application, and set the parameters from the outside.

Flexible parameterization of the application settings depends on the environment in which the application will run. We moved all environment settings outside: everything is parameterized through the K8S configuration and Helm parameters. When Helm deploys a build to the test environment, test parameters are applied to it; in the production environment, production settings are applied.

The most difficult part was to parameterize all the services and variables that depend on the environment, and to turn them into environment variables and environment-specific configuration parameters for Helm.

Application parameters are passed via environment variables. Their values are set in containers using a K8S configMap, which is templated using Go templates. For example, setting an environment variable to the domain name can be done like this:

    APP_EXTERNAL_DOMAIN: {{ (pluck .Values.global.env .Values.app.properties.app_external_domain | first) }}

.Values.global.env - this variable stores the environment name (prod, stage, UAT).
.Values.app.properties.app_external_domain - in this variable we set the required domain in the values.yaml file.

When the application is updated, Helm renders the configmap.yaml file from the templates and fills APP_EXTERNAL_DOMAIN with the desired value, depending on the environment in which the application update is running. This variable is already set in the container, and the application can access it; accordingly, the variable will have a different value in each environment.
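For this pluck lookup to work, the domain has to be stored in values.yaml as a dictionary keyed by environment name: pluck picks the entry matching .Values.global.env, and first unwraps the resulting list. A minimal sketch of such a layout (the domains are made up for illustration):

    # values.yaml -- hypothetical fragment matching the template above.
    global:
      env: uat            # set per environment: prod, stage, UAT
    app:
      properties:
        app_external_domain:
          uat: app.uat.example.com
          prod: app.example.com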

Support for K8S appeared in Spring Cloud relatively recently, including work with configMaps: Spring Cloud Kubernetes. While the project is actively developed and changes dramatically, we cannot use it in production. But we actively monitor its state and use it in DEV configurations. As soon as it stabilizes, we will switch to it from environment variables.
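For reference, pointing a Spring Boot service at a configMap through Spring Cloud Kubernetes looks roughly like this (a sketch based on the project's documented properties; the service and configMap names are our assumptions):

    # bootstrap.yml -- hypothetical fragment for Spring Cloud Kubernetes.
    spring:
      application:
        name: my-service
      cloud:
        kubernetes:
          config:
            name: my-service-config   # configMap to load properties from
          reload:
            enabled: true             # refresh beans when the configMap changes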

Summary


So, Continuous Deployment is up and running. All updates happen at the touch of a button. Delivery of changes to the production environment is automatic. And, importantly, updates do not stop the system.



Plans for the future: automatic database migration


We are thinking about database migration and the possibility of rolling those changes back. After all, two different versions of the application run at the same time: the old one keeps working while the new one starts up. We will turn off the old one only once we are sure the new version works. Database migrations must therefore allow working with both versions of the application.

Therefore, we cannot simply rename a column or make similar changes. Instead, we can create a new column, copy the data from the old column into it, and write triggers that, whenever data is updated, copy and update it in the other column simultaneously. After a successful deployment of the new version, once the post-launch support period has passed, we will be able to remove the old column and the trigger that is no longer needed.

If the new version of the application does not work correctly, we can roll back to the previous version, including the old version of the database. In short, our changes will allow working with several versions of the application at the same time.

We plan to automate database migration through a K8S Job, integrating it into the CD process. And we will be sure to share this experience on Habr.
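A sketch of what such a migration Job could look like (the image, secret, and configMap names are made up for illustration; psql picks up the PG* connection variables from the assumed secret):

    # migration-job.yaml -- hypothetical K8S Job running a backward-compatible
    # migration (new column plus sync trigger) before the new version rolls out.
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-migration
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: postgres:11
              envFrom:
                - secretRef:
                    name: db-credentials      # assumed PGHOST/PGUSER/PGPASSWORD
              command: ["psql", "-f", "/migrations/001_add_column.sql"]
              volumeMounts:
                - name: migrations
                  mountPath: /migrations
          volumes:
            - name: migrations
              configMap:
                name: db-migrations           # assumed configMap with SQL files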

Source: https://habr.com/ru/post/447812/

