
Continuous integration and deployment of Docker in GitLab CI

In this tutorial, we’ll look at how to configure the continuous integration and deployment of a Flask application on a Docker Swarm through GitLab CI.

First we will set up the working environment, including creating the servers for the Docker Swarm nodes. Then we will create a simple Flask application with Redis and prepare GitLab CI for continuous delivery.

Container orchestration and load balancing are not trivial topics and require considerable preparation, especially if you plan to build your own orchestration system (for example, using Kubernetes). However, there are tools such as Docker Swarm or Rancher that take care of containers, internal networks and load distribution, and make it possible to deploy a scalable system on your own servers.

In addition, GitLab supports Docker well and allows you to connect your own image registry (GitLab Registry) in a few simple steps. It also lets you track the status of the project and manage the deployed versions of the application, allowing you to roll back to a previous version in one click if necessary.

Prerequisites


Before starting, you must ensure that the following conditions are met:


A GitLab server is installed and available over HTTPS (see the official guide to setting up HTTPS in GitLab).

Server installation


To configure and run the application in Swarm mode, we need two types of servers: a manager and a worker. In addition, one more server will be allocated for GitLab Runner to perform the tasks of building and running containers.

We will create virtual servers on the Vscale platform; however, if you use another service, for example DigitalOcean, the steps will be similar to those described below.

Create three new servers from the Docker image:


Created virtual machines
We will use the obtained IP addresses further.

Protection of the docker service on the management server with a self-signed certificate


Note. The use of TLS and certificate authority management is a topic that requires considerable preparation. It is advisable to familiarize yourself with OpenSSL, x509 and TLS before using them in real projects.

At the final stage of application deployment in a working Swarm environment, you need a secure connection between GitLab Runner and the Docker service running on the Manager server, which can be seen in the diagram below:

Server chart
The process of deploying an application in a production environment.

To do this, the client (GitLab Runner) and the Docker service (Manager) must use certificates issued by a single certificate authority. After creating a certificate and key for the client, you can use them to connect to the Docker service remotely and perform various operations. Be as careful as possible when storing client certificates and keys, since they provide complete control over the Docker service.

Updating the Docker service on the Manager machine


We will connect via SSH to the Manager server and update Docker, because later we will need additional features available in newer versions of the Docker API. Add the official Docker repository to get the latest version:

 $ apt-get update
 $ apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     software-properties-common
 $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
 $ add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
 $ apt-get update

Install the latest version of Docker:

 $ apt-get install docker-ce 

To check the installation, run the command:

 $ docker version 

The output of the command may differ; what matters is that both the client and server API versions are greater than 1.24:

 Client:
  Version:      17.09.0-ce
  API version:  1.32
  Go version:   go1.8.3
  Git commit:   afdb6d4
  Built:        Tue Sep 26 22:42:18 2017
  OS/Arch:      linux/amd64

 Server:
  Version:      17.09.0-ce
  API version:  1.32 (minimum version 1.12)
  Go version:   go1.8.3
  Git commit:   afdb6d4
  Built:        Tue Sep 26 22:40:56 2017
  OS/Arch:      linux/amd64
  Experimental: false
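The API versions in the output above can also be compared programmatically. A minimal sketch (the helper names are our own, not part of Docker's tooling):

```python
# Sketch: verify that the reported Docker API versions meet the 1.24
# minimum required for Swarm mode. Version strings are compared as
# (major, minor) integer tuples, not as floats: as floats 1.9 would
# wrongly compare greater than 1.24.

MIN_API = (1, 24)

def parse_api_version(version: str) -> tuple:
    """Turn a version string like '1.32' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def swarm_capable(client_api: str, server_api: str) -> bool:
    """Both the client and the server API must be at least 1.24."""
    return (parse_api_version(client_api) >= MIN_API
            and parse_api_version(server_api) >= MIN_API)

# Values taken from the `docker version` output above:
print(swarm_capable("1.32", "1.32"))   # True  - an up-to-date installation
print(swarm_capable("1.23", "1.32"))   # False - a client that is too old
```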

Creating a certificate and keys


Staying on the Manager server, let's create a certificate authority (CA). Create a new directory and change into it:

 $ mkdir certificates
 $ cd certificates

First you need to create a private RSA key for the CA (you will be asked to come up with a passphrase of at least 4 characters):

 $ openssl genrsa -aes256 -out ca-key.pem 4096
 Generating RSA private key, 4096 bit long modulus
 ..................................++
 ..................................++
 e is 65537 (0x10001)
 Enter pass phrase for ca-key.pem:
 Verifying - Enter pass phrase for ca-key.pem:

Now create the CA certificate itself. You will be asked for information identifying the certificate authority. At the fully qualified domain name (FQDN) step you would normally enter the domain name under which the Manager server is reachable, but for the purposes of this example (do not do this on production machines!) we simply use the word manager to designate the server:

 $ openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
 Enter pass phrase for ca-key.pem:
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [AU]:
 State or Province Name (full name) [Some-State]:
 Locality Name (eg, city) []:
 Organization Name (eg, company) [Internet Widgits Pty Ltd]:
 Organizational Unit Name (eg, section) []:
 Common Name (eg server FQDN or YOUR name) []:manager
 Email Address []:your@email.com

Next, create a private key for the server:

 $ openssl genrsa -out server-key.pem 4096 

Now that we have a certificate authority, we can create a certificate signing request (CSR) for the server. The CN (Common Name) field must match the FQDN value used in the previous step:

 $ openssl req -subj "/CN=manager" -sha256 -new -key server-key.pem -out server.csr 

Additionally, specify the IP address of the Manager server (the machine we are currently working on):

 $ echo subjectAltName = DNS:manager,IP:{manager_ip_address} >> extfile.cnf
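The echo command above appends an x509 subjectAltName extension line to extfile.cnf. A small sketch showing the exact format of that line (the IP address below is a made-up placeholder):

```python
# Sketch: the subjectAltName line that `echo ... >> extfile.cnf` produces.
# The SAN extension lets the certificate cover both the DNS name and the
# raw IP address of the Manager server.

def san_line(dns_name: str, ip_address: str) -> str:
    """Format an x509 subjectAltName covering a DNS name and an IP."""
    return f"subjectAltName = DNS:{dns_name},IP:{ip_address}"

print(san_line("manager", "203.0.113.10"))
# subjectAltName = DNS:manager,IP:203.0.113.10
```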

Create a signed key for the server:

 $ openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \ -CAcreateserial -out server-cert.pem -extfile extfile.cnf 

Create a client key that we will use to access the Docker service:

 $ openssl genrsa -out key.pem 4096 

Create a signature request and additionally specify the type of key usage - for authorization:

 $ openssl req -subj '/CN=client' -new -key key.pem -out client.csr
 $ echo extendedKeyUsage = clientAuth >> extfile.cnf

Get the signed client key:

 $ openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \ -CAcreateserial -out cert.pem -extfile extfile.cnf 

Now you can delete the request files:

 $ rm -v client.csr server.csr 

As a result, we received the following files:

 $ ls
 ca-key.pem  ca.pem  ca.srl  cert.pem  extfile.cnf  key.pem  server-cert.pem  server-key.pem

Let's leave the terminal with the session on the Manager server open, since we will need it later.

Now we have everything needed to set up secure access between the GitLab Runner and the working Swarm environment.

Setting secret variables in GitLab CI


We will not store the client key material on the Runner machine for security reasons. For such tasks GitLab CI provides secret environment variables.
Create a new project in GitLab:

New project in GitLab

After creating the project, go to the CI / CD settings:

CI / CD

Open the secret variables section:

Secret variables

We need to add and save three variables whose values are the contents of the files created in the previous step:

 TLSCACERT — the contents of ca.pem
 TLSCERT — the contents of cert.pem
 TLSKEY — the contents of key.pem

Let's go back to the terminal with the session on the Manager server and execute the command:

 $ cat ca.pem
 -----BEGIN CERTIFICATE-----
 MIIFgTCCA2mgAwIBAgIJAMzFvrYTSMoxMA0GCSqGSIb3DQEBCwUAMFcxCzAJBgNV
 BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
 ...
 bI9XGs39F+r8Si5y6oHqkZHMpRX631i2KRA6k4jBPrZrS0MH3OwsCobuat5T1ONH
 Kx7TFZSuFO25XIut1WucVn5yPWLTKRniMV7dVws9i9x9Sp2Iamk+w2x1GPO6bHtr
 BWqdORkUEWMs+DTgX2J989AFh7gnYwHZ2Bo7HKlC6IbOlol7b2E/5p7hWrpe7sf+
 oQDn1bhgoauhq2AL4BysJfA3uHoA
 -----END CERTIFICATE-----
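When pasting certificate contents into a secret variable it is easy to pick up extra shell output or lose the BEGIN/END markers. A hypothetical helper (our own sketch, not part of GitLab or OpenSSL) that checks a pasted value contains one complete PEM block:

```python
# Sketch: validate that a value destined for a GitLab secret variable
# is a complete PEM block (the helper name is our own invention).

def extract_pem_block(text: str, kind: str = "CERTIFICATE") -> str:
    """Return the first complete PEM block of the given kind, or raise."""
    begin = f"-----BEGIN {kind}-----"
    end = f"-----END {kind}-----"
    start = text.find(begin)
    stop = text.find(end)
    if start == -1 or stop == -1:
        raise ValueError(f"no complete {kind} block found")
    return text[start:stop + len(end)]

# A pasted value that accidentally includes the shell command line:
raw = """$ cat ca.pem
-----BEGIN CERTIFICATE-----
MIIFgTCCA2mgAwIBAgIJ...
-----END CERTIFICATE-----
"""
pem = extract_pem_block(raw)
print(pem.startswith("-----BEGIN CERTIFICATE-----"))  # True
print(pem.endswith("-----END CERTIFICATE-----"))      # True
```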

Copy this value and add a new secret variable in GitLab:

image

Now, following this example, add the remaining values, cert.pem and key.pem.

Configuring the Docker service on the Manager server


By default, only the user who owns the process has access to the Docker service. To perform application deployment operations from a remote host, we need to allow connection from outside, while using the TLS protocol. We have already received the necessary certificates and keys, it remains to configure Docker to work with them.

We will create a separate configuration file that explicitly lists the hosts on which Docker will be available, so first we need to remove the standard -H parameter that specifies the host. To do this, create a new directory, docker.service.d , in which we will override the service startup parameters:

 $ mkdir -p /etc/systemd/system/docker.service.d 

Create a customization file:

 $ nano /etc/systemd/system/docker.service.d/exec-start.conf 

Add the following section; the ExecStart parameter must first be cleared and then set to the new value:

 [Service]
 ExecStart=
 ExecStart=/usr/bin/dockerd

Create a new configuration file:

 $ nano /etc/docker/daemon.json 

And write the following text, which enables TLS for access to the Docker service and specifies the location of the server key and certificate:

 {
   "hosts": ["tcp://0.0.0.0:2376", "fd://"],
   "tlsverify": true,
   "tlscacert": "/root/certificates/ca.pem",
   "tlscert": "/root/certificates/server-cert.pem",
   "tlskey": "/root/certificates/server-key.pem"
 }
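The same configuration can be generated and sanity-checked with a short script before Docker reads it. A sketch, assuming the certificate paths created earlier:

```python
import json

# Sketch: build /etc/docker/daemon.json programmatically instead of
# editing it by hand, and verify it is valid JSON before Docker sees it.

daemon_config = {
    # Listen on TCP 2376 (the conventional Docker TLS port) and keep
    # the local systemd socket ("fd://") working as well.
    "hosts": ["tcp://0.0.0.0:2376", "fd://"],
    # Require clients to present a certificate signed by our CA.
    "tlsverify": True,
    "tlscacert": "/root/certificates/ca.pem",
    "tlscert": "/root/certificates/server-cert.pem",
    "tlskey": "/root/certificates/server-key.pem",
}

rendered = json.dumps(daemon_config, indent=2)
print(rendered)
# Round-trip to make sure the file parses back to the same structure.
assert json.loads(rendered) == daemon_config
```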

For the changes to take effect, restart the Docker service:

 $ systemctl daemon-reload
 $ service docker restart

Now we can connect via TLS to our Docker service at

 {manager_ip_address}:2376

Activate Swarm mode


To begin with, we recommend studying the materials on Swarm mode, for example on the official Docker website. A swarm is a cluster of Docker services located on different physical or virtual machines that behaves as a single unit.
The distribution of requests between the available Docker services follows the ingress load-balancing scheme: any incoming request passes through the internal balancing mechanism and is then redirected to a service instance that can serve it at that moment.
Scaling is accomplished by specifying the number of replicas of the internal services, which we will encounter later.
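The ingress balancing described above can be pictured as a simple rotation over replicas. A toy sketch (real Swarm balancing is connection-based and more involved than plain round-robin, so this only illustrates the idea):

```python
from itertools import cycle

# Sketch: ingress load balancing in miniature. Any node accepts a
# request; the routing mesh then forwards it to one of the service's
# replicas, so consecutive requests land on different containers.

replicas = ["web.1", "web.2", "web.3", "web.4"]  # 4 replicas, as in our stack
balancer = cycle(replicas)                       # simplistic round-robin stand-in

served_by = [next(balancer) for _ in range(6)]
print(served_by)
# ['web.1', 'web.2', 'web.3', 'web.4', 'web.1', 'web.2']
```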
We will activate Docker Swarm mode on the Manager server, where the cluster manager will run. Then we will add the Docker service on the Node1 machine as a worker.

In the terminal with an open session on the Server Manager, execute the command:

 $ docker swarm init
 Swarm initialized: current node (r1mbxr2dyuf48zpm5ss0kvwv7) is now a manager.

 To add a worker to this swarm, run the following command:

     docker swarm join --token SWMTKN-1-5ihkl37kbs13po7htnj9dzzg3gex4i6iuvjho7910crd0hv895-36jw5epwcw3xwpzmqf1mqgod2 {manager_ip_address}:2377

 To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

As the message says, the current Docker service has become a manager and is ready to accept subordinate hosts through the specified command. Copy this command and connect to the Node1 server via SSH to add it to the Swarm:

 $ docker swarm join --token SWMTKN-1-1lhmuomvb060rnom4jqj8gxc565f4wgwadjs9ucvqx6huwvbfc-6vt1ljdhldxtetjv2hnct7sh4 {manager_ip_address}:2377

The result of the successful execution of the command should be the message:

 This node joined a swarm as a worker. 

The next step is to set up the runner server, which will perform all the jobs sent by GitLab CI.

Configure GitLab Runner


The final step in setting up the continuous integration and deployment environment with Docker is connecting a GitLab CI runner, which will perform all the application building and testing.
You can use shared runners for this, but in this guide we will create our own runner on the Runner server created earlier.

Connect via SSH to the Runner server. First you need to install GitLab Runner and connect this server to GitLab.
Add the GitLab developer repository:

 $ apt update
 $ apt install curl
 $ curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | bash
 $ apt-get install gitlab-runner

Let's go back to the GitLab web interface to get the URL and registration token. Return to the project settings in the CI / CD section, where we left off last time. Open the Runners settings section:

Runner settings
The Runners settings section contains information on connected runners, both specific and shared:

Runner Settings

The Specific Runners block has the necessary URL and registration token values.

Let's return to the terminal with the session on the Runner server and, replacing the values with our own, execute the command:

 $ gitlab-runner register -n \
     --url http://{gitlab_ip_address}/ \
     --registration-token _Kof1SxCHzVNcwuZZEwx \
     --executor docker \
     --description "Docker Prod Runner" \
     --docker-image "docker:latest" \
     --docker-privileged \
     --tag-list docker

 Registering runner... succeeded                     runner=_Kof1SxC
 Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

Now we have a registered runner that will perform all the jobs sent by GitLab CI.
It remains to configure SSH access to the project's Git repository on GitLab.
Create a key pair, replacing the email address and leaving all the requested values at their defaults:

 $ ssh-keygen -t rsa -b 4096 -C "your_email@example.com" 

Next, you need to add information about the GitLab server where the Git repository is located to the list of known hosts to prevent errors during the connection (the IP address needs to be replaced with the IP address of the GitLab server available in the control panel):

 $ ssh-keyscan -t rsa {gitlab_ip_address} >> .ssh/known_hosts

Then you need to add the public key in GitLab to allow connections from the Runner server. Copy the key value:

 $ cat .ssh/id_rsa.pub
 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDMY+G6rmx+AZ6Ow1lVr+4ox0HaAaV4xwthCS3ucyI3KsXVV+ltLU0zkFOP8WZoTXCHo38Fpcu5KwYe3V6L/hZ26fNse6WhJ6EvRmVx7wVHyixhpzKj6Jp9zzAf24SbtrjGgNtN4ASKyouU///a3+gtM+eWQYdxavz6wlJ0xgm8MnDqpbCUv1M7IRWsKhejA9vLSXjfdkxQnxVSCOT/FXb/eDsGRTs7WMXYapqeQ0msTXDCFlNDVQaRWZagXpHLRDkeOhRE2rJ6daj8YNKKx0jaatRKIsICqwUljvPgsrnpF9FiUg8n8PTWyYbz3VpwUoIPnFiFXvbgIn8xLb2/4QkFDoZUgyLI+VgrmZmd0HZPWvW5QbMLZ8vwb/Izi0TG/+qoMm8jas0RaUUp18rQAc4GmCLRsFbzN3DsnME31xFa0y/pwA3LK9ptIRivYq82uP5twq0jXpMSji8w+No7kBI5O9VUHmbRYYYWpn+jeKTxmoVORsrCHpAT7Cub0+Ynyq1M7Em0RMqZgdzLsP9rlLwRkc6ZEgqpVQHDZgwJsnQ5qo/6lr18bD9QHSe5t+SSnUbnkmXkp0xb0ivC4XayxCjYVIOoZV2cqyGa+45s7LY+ngPk0Cg+vSMHV8/enEwu1ABdpoGVjaELJOtw1UBr4y9GCyQ0OhKnrzWmqL6+HnEMDQ== your_email@example.com

And add it to GitLab:

Adding a key to GitLab

Let's move on to creating and completing a simple Flask application that uses two additional services: Nginx to route requests and Redis to store the page's hit counter.

Connecting Docker Registry to GitLab


Before creating the application, we need to activate the Docker image storage feature in our GitLab application in order to effectively manage the deployed versions of the application and provide the ability to roll back to previous versions.

Connect via SSH to the GitLab server and open the configuration file:

 $ nano /etc/gitlab/gitlab.rb 

Next, add the address where the Docker registry will be available, replacing example.com with your host name:

 registry_external_url 'https://gitlab.example.com:4567' 

Add two more lines with the location of the certificate and key, since we are using the same domain name as the main GitLab application (if you have not configured HTTPS for GitLab yet, you can do it now):

 registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.example.com/fullchain.pem"
 registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.example.com/privkey.pem"

Reconfigure the GitLab service to activate the registry:

 $ gitlab-ctl reconfigure 

Let's move on to the flask-docker-swarm project created earlier in GitLab. If the registry was activated successfully, the Registry section will appear in the project menu:

Registry Check

For further work with the internal image registry we will need a login and password to connect to it. We will use GitLab's secret variables to store the password. To do this, go to the Settings section and select the CI / CD block:

Adding secret variables

Open the secret variables section and add a new variable, HUB_REGISTRY_PASSWORD , whose value is the password of the GitLab user account:

Password setting

Create application


The program will be a simple web application that counts the number of visits and displays information about the container it is running in. To do this, we will create a Dockerfile (a Docker image configuration file) for each service used (Nginx, Redis, Flask) and specify how they should interact with each other.

Open the page of the project created earlier:

Project page

Run the following command to clone the repository and go to the working directory, replacing the domain name:

 $ git clone git@gitlab.example.ru:root/flask-docker-swarm.git
 $ cd flask-docker-swarm

Create three directories inside, one for each service:

 $ mkdir nginx
 $ mkdir web
 $ mkdir redis

Creating Nginx Service


For the Nginx service we will need three files: a Dockerfile and two configuration files. Create the file with general settings for the entire web server:

 $ nano nginx/nginx.conf 

And write the following text:

 # The user under which the Nginx worker processes run
 user nginx;

 # The number of worker processes; usually set to
 # the number of available CPU cores
 worker_processes 1;

 # Location of the error log and the minimum level of messages to record
 error_log /var/log/nginx/error.log warn;

 # The file storing the PID of the main Nginx process
 pid /var/run/nginx.pid;

 events {
     # Maximum number of simultaneous connections per worker
     worker_connections 1024;
 }

 # The http block configures how Nginx handles HTTP traffic
 http {
     # Map file extensions to MIME types
     include /etc/nginx/mime.types;

     # Default MIME type for responses
     default_type text/html;

     # Format of the access log entries
     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

     # Location of the Nginx access log
     access_log /var/log/nginx/access.log main;

     # Options for efficient file transfer
     sendfile on;
     tcp_nopush on;
     tcp_nodelay on;

     # How long to keep idle client connections open
     keepalive_timeout 65;

     # Gzip compression can be enabled here
     #gzip on;

     # Include the virtual server configurations
     include /etc/nginx/conf.d/*.conf;
 }

The following file defines the parameters of the virtual server; this is where the link to the web service running our application lives:

 $ nano nginx/flask.conf 

And write the text:

 # The server block describes a virtual server
 server {
     # The port / IP address on which to accept requests
     listen 80 default_server;
     # server_name xxx.yyy.zzz.aaa

     # The charset appended to the "Content-Type" response header
     charset utf-8;

     # Nginx could serve static files directly
     #location /static {
     #    alias /usr/src/app/web/static;
     #}

     # Proxy all other requests to the application server (Gunicorn (WSGI server))
     location / {
         # The address of the web service inside the Docker network
         proxy_pass http://web:5000;

         # Pass information about the original client to the backend
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

         # Limits on the size of the client request body
         client_max_body_size 5M;
         client_body_buffer_size 5M;
     }
 }

The block with static content is commented out because our application has no additional files, but in the future you can add a static directory to the web service and uncomment this block to serve static files efficiently.

Create a Dockerfile in which we take the ready-made Nginx image from DockerHub and modify it to use our configuration files:

 $ nano nginx/Dockerfile 

And write the following text:

 FROM nginx:1.13.6

 RUN rm /etc/nginx/nginx.conf
 COPY nginx.conf /etc/nginx/

 RUN rm /etc/nginx/conf.d/default.conf
 COPY flask.conf /etc/nginx/conf.d

Creating a Redis service


For the Redis service, create a Dockerfile:

 $ nano redis/Dockerfile 

With simple content:

 FROM redis:3.2.11 

We make no additional changes here, but they are quite possible in the future, so we create a separate service for it.

Creating a service with the Flask application


We start the service with the Flask application by creating the main executable file:

 $ nano web/main.py 

Insert the following code:

 from flask import Flask
 from redis import Redis, RedisError
 import os
 import socket

 # Connect to Redis
 redis = Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)

 app = Flask(__name__)

 @app.route("/")
 def hello():
     try:
         visits = redis.incr("counter")
     except RedisError:
         visits = "<i>cannot connect to Redis, counter disabled</i>"

     html = "<h3>Hello {name}!</h3>" \
            "<b>Hostname:</b> {hostname}<br/>" \
            "<b>Visits:</b> {visits}"
     return html.format(name=os.getenv("NAME", "world"),
                        hostname=socket.gethostname(),
                        visits=visits)

 if __name__ == "__main__":
     app.run()

Create a separate file with the dependencies used:

 $ nano web/requirements.txt 

And we list the software packages used:

 Flask==0.12.2
 Redis==2.10.6
 Gunicorn==19.7.1
 Nose2
 Coverage

For demonstration, let's set up a simple unit test in which we check that the page is accessible. Create a test file:

 $ nano web/test_smoke.py 

Copy and paste the text:

 import os
 import unittest

 from main import app

 class BasicTests(unittest.TestCase):

     # executed prior to each test
     def setUp(self):
         app.config['TESTING'] = True
         app.config['WTF_CSRF_ENABLED'] = False
         app.config['DEBUG'] = False
         self.app = app.test_client()
         self.assertEqual(app.debug, False)

     # executed after each test
     def tearDown(self):
         pass

     def test_main_page(self):
         response = self.app.get('/', follow_redirects=True)
         self.assertEqual(response.status_code, 200)

 if __name__ == "__main__":
     unittest.main()

Create a file that will be the entry point of our application:

 $ nano web/wsgi.py 

In which we specify the name of the imported object that Gunicorn will use:

 from main import app

 if __name__ == "__main__":
     app.run(host='0.0.0.0')

The last file in the web directory will be the Dockerfile, which will list the commands for creating an image of our service:

 $ nano web/Dockerfile 

With the following content:

 FROM python:3.6.3

 RUN groupadd flaskgroup && useradd -m -g flaskgroup -s /bin/bash flask

 WORKDIR /app
 ADD . /app

 RUN pip install -r requirements.txt

Creating services based on Docker containers


To create managed services we use the docker-compose tool, which specifies the image each container is launched from and defines the behavior of the service as a whole. To do this, create a docker-compose.yml file:

 $ nano docker-compose.yml 

And we write this text, replacing the domain name:

 version: "3.4"

 services:
   web:
     image: gitlab.example.ru:4567/root/flask-docker-swarm/web:${CI_COMMIT_SHA}
     deploy:
       replicas: 4
       restart_policy:
         condition: on-failure
     command: gunicorn -w 3 --bind 0.0.0.0:5000 wsgi:app

   nginx:
     image: gitlab.example.ru:4567/root/flask-docker-swarm/nginx:${CI_COMMIT_SHA}
     deploy:
       mode: global
       restart_policy:
         condition: on-failure
     ports:
       - "80:80"

   redis:
     image: gitlab.example.ru:4567/root/flask-docker-swarm/redis:latest
     deploy:
       replicas: 1
       placement:
         constraints: [node.role == manager]
       restart_policy:
         condition: on-failure
     ports:
       - "6379"

Let's take a closer look at the file structure: the web service runs 4 replicas of the application image under Gunicorn; the nginx service is deployed in global mode, that is, one container on every cluster node, and publishes port 80; the redis service runs as a single replica pinned to the manager node.


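To see what these deploy settings mean in terms of container counts, here is a small sketch for a two-node cluster like ours (one manager plus Node1; the helper is our own illustration, not Swarm's scheduler):

```python
# Sketch: how many containers the stack above schedules on a
# two-node cluster (one manager, one worker).

nodes = ["manager", "node1"]

def container_count(mode: str, replicas: int = 1) -> int:
    """'global' runs one task per node; 'replicated' runs `replicas` tasks."""
    return len(nodes) if mode == "global" else replicas

total = (container_count("replicated", replicas=4)    # web: 4 replicas
         + container_count("global")                  # nginx: one per node
         + container_count("replicated", replicas=1)) # redis: pinned to manager
print(total)  # 7
```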
To run on a local machine, we need to create an additional docker-compose configuration file for easy testing of the application without using the Nginx service:

 $ nano docker-compose.override.yml 

Insert the following text:

 version: "3.4"

 services:
   web:
     image: web
     environment:
       - FLASK_APP=wsgi.py
       - FLASK_DEBUG=1
     build:
       context: ./web
       dockerfile: Dockerfile
     command: 'flask run --host=0.0.0.0'
     links:
       - redis
     ports:
       - "5000:5000"
     volumes:
       - ./web/:/usr/src/app/web

   redis:
     image: redis
     build:
       context: ./redis
       dockerfile: Dockerfile
     ports:
       - "6379:6379"

In structure the file resembles the main version, but it does not use the Nginx service and contains additional sections for building the containers.

GitLab CI is managed via the configuration file:

 $ nano .gitlab-ci.yml 

Let's write the following text, changing example.ru to our domain name:

 image: docker:17.09.0-ce

 services:
   - docker:dind

 before_script:
   - apk add --update py-pip && pip install docker-compose

 stages:
   - test
   - build
   - deploy

 unittests:
   stage: test
   script:
     - cd web
     - pip install -q -r requirements.txt
     - nose2 -v --with-coverage
   tags:
     - docker

 docker-build:
   stage: build
   script:
     - docker login -u root -p $HUB_REGISTRY_PASSWORD https://gitlab.example.ru:4567/
     - docker build -t gitlab.example.ru:4567/root/flask-docker-swarm/nginx:$CI_COMMIT_SHA ./nginx
     - docker push gitlab.example.ru:4567/root/flask-docker-swarm/nginx:$CI_COMMIT_SHA
     - docker build -t gitlab.example.ru:4567/root/flask-docker-swarm/web:$CI_COMMIT_SHA ./web
     - docker push gitlab.example.ru:4567/root/flask-docker-swarm/web:$CI_COMMIT_SHA
     - docker build -t gitlab.example.ru:4567/root/flask-docker-swarm/redis:latest ./redis
     - docker push gitlab.example.ru:4567/root/flask-docker-swarm/redis:latest
   tags:
     - docker

 deploy-to-swarm:
   stage: deploy
   variables:
     DOCKER_HOST: tcp://{manager_ip_address}:2376
     DOCKER_TLS_VERIFY: "1"
     DOCKER_CERT_PATH: "/certs"
   script:
     - mkdir -p $DOCKER_CERT_PATH
     - echo "$TLSCACERT" > $DOCKER_CERT_PATH/ca.pem
     - echo "$TLSCERT" > $DOCKER_CERT_PATH/cert.pem
     - echo "$TLSKEY" > $DOCKER_CERT_PATH/key.pem
     - docker login -u root -p $HUB_REGISTRY_PASSWORD $CI_REGISTRY
     - docker stack deploy -c docker-compose.yml env_name --with-registry-auth
     - rm -rf $DOCKER_CERT_PATH
   environment:
     name: master
     url: http://{manager_ip_address}
   only:
     - master
   tags:
     - docker

The DOCKER_HOST variable contains the URL of the remote Docker service on the Manager server, and $CI_REGISTRY expands to the address of the GitLab Registry.

The web and nginx images are tagged with $CI_COMMIT_SHA, so every commit produces its own version, while the Redis image is always tagged latest.
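The resulting image references can be sketched as follows (the registry host is the tutorial's example domain; `image_ref` is our own illustrative helper, not a GitLab API):

```python
# Sketch: the image references the pipeline builds and pushes.

REGISTRY = "gitlab.example.ru:4567/root/flask-docker-swarm"

def image_ref(service: str, commit_sha: str = None) -> str:
    """web/nginx are tagged with the commit SHA; redis stays on `latest`."""
    tag = commit_sha if commit_sha else "latest"
    return f"{REGISTRY}/{service}:{tag}"

sha = "afdb6d4"  # stand-in for $CI_COMMIT_SHA
print(image_ref("web", sha))  # gitlab.example.ru:4567/root/flask-docker-swarm/web:afdb6d4
print(image_ref("redis"))     # gitlab.example.ru:4567/root/flask-docker-swarm/redis:latest
```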

The pipeline thus consists of three stages: running the unit tests, building and pushing the images, and deploying the stack to Swarm.

Let's check the application on a local machine before pushing. You will need Docker installed locally (this example uses Ubuntu 16.04; the steps for Windows will differ). Add the Docker repository:

 $ sudo apt-get update
 $ sudo apt-get install \
     apt-transport-https \
     ca-certificates \
     curl \
     software-properties-common
 $ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 $ sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
 $ sudo apt-get update

Install Docker:

 $ sudo apt-get install docker-ce 

Build the images using the override file:

 $ sudo docker-compose -f docker-compose.override.yml build 

And start the containers:
 $ sudo docker-compose -f docker-compose.override.yml up 

The application will be available in the browser at:

 localhost:5000 

If everything was done correctly, you will see a greeting page with the hostname and the visit counter.

Note that when you refresh the page the hostname does not change: locally we are running a single web container, so all requests are served by it.


The local check is complete; it's time to push the code to the repository:

 $ git add --all
 $ git commit -m "init"
 $ git push origin master

After the push, GitLab CI will start the pipeline: it will run the tests, build the Docker images, push them to the registry and deploy the application to Docker Swarm.

The progress can be followed in the GitLab Pipelines section:

Pipelines!

By opening the Jobs section you can watch the log of each individual job.

The Environments section shows the state of the deployed environments:

Environments

We deploy only the master branch. Every new commit to master creates a new deployment, and the history of previous versions is preserved:



The Rollback button re-runs the deploy job of the corresponding earlier pipeline.

This way you can roll back to any previously deployed version of the application in one click.

Let's check the result. Open the IP address of the Manager server in the browser:


Note that the hostname now changes as the page is refreshed, cycling through four values: requests are distributed among the four replicas of the web service.
Docker Swarm has spread the containers across the cluster nodes. To see where the web replicas are running, connect to the Manager server and execute (env_name is the stack name used in docker stack deploy):

 $ docker stack ps env_name 

That concludes setting up continuous integration and deployment of a Docker application with GitLab CI. As a result, every push to the repository is tested, built and deployed to a Docker Swarm cluster over a secure connection to the Docker service.

Source: https://habr.com/ru/post/344324/

