
Docker, GitLab, free SSL certificates and other modern web development goodies

Hello again! For almost five years I haven't written new articles here, although, to be honest, I always knew that sooner or later I would start again. I don't know about you, but to me this business has always seemed quite fascinating.


Starting to write new material after such a long break is the hardest part. But once the goal is set, you must see it through. I'll start from a little way back.


Throughout my adult life, web development has been and remains my main occupation. That is why I'll confess right away that this material should be taken as an attempt by an amateur system administrator, not a professional one, to build a Docker cluster. In this article I cannot claim expert knowledge of clustering; on the contrary, I want to test the validity of my own experience.


Below the cut you will find a quick-start guide to using Docker at the level needed to solve the specific tasks listed below, without diving into the jungle of virtualization and other related topics. If you want to start successfully using this modern technology, thereby greatly simplifying a number of processes - from developing web products to deploying and migrating them to any modern hardware - read on!


Opening Illustration - Docker


Preamble


We begin, of course, with the formulation of the problem and a definition of the main technologies and methods used in this guide.


From the very beginning, Docker interested me as a way to quickly create a small but fairly universal cluster for my own projects (work, study, etc.). Since I was not going to do system administration professionally, I decided to learn the basics of clustering up to the point where I could easily deploy any popular software stack for a web project. Next, I will look at deploying the following configurations on Docker:


  1. LAMP (Linux + Apache + MySQL + PHP);
  2. LEMP (Linux + NGINX + MySQL + PHP);
  3. MEAN (MongoDB + Express.js + Angular + Node.js).

The first two, I think, need no introduction. The third consists of MongoDB , Express.js , Node.js. I most often used MEAN for writing a RESTful API , for example, as a basis for further development of a mobile application.


After that, I made the task a bit harder for myself by adding the following requirements:


  1. The ability to easily use different domains (or, more often, subdomains) for each individual container (on the principle of virtual hosts ).
  2. Using the HTTPS protocol by default. Moreover, I wanted to organize free generation of SSL certificates that are in no way inferior to their paid counterparts.
  3. Deploying GitLab CE on the same server as the main VCS for working on projects, not only alone but also in a team.

Basic definitions:



Installation and Setup


Problems installing Docker and the other packages should not arise: the process is described in detail on the official website. Next, I will give the general list of commands required for the initial setup.


I'll clarify right away that in this article I consider configuring Docker and all related programs on the CentOS 7 distribution, since I have long been used to it as my main server OS. In general, on any other Linux distribution the steps will be roughly the same, with the only difference that, for example, on Ubuntu you would use apt-get instead of yum / dnf (for CentOS / Fedora).


Docker + Docker Compose:


Preparation:


$ sudo yum update
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF


Installing Docker Engine :


$ sudo yum install docker-engine
$ sudo systemctl enable docker.service
$ sudo systemctl start docker


Creating a docker group and adding the current user to it (this is necessary to work with Docker without using sudo or root access; log out and back in for the group change to take effect):


$ sudo groupadd docker
$ sudo usermod -aG docker your_username


Verifying successful installation:


$ docker run --rm hello-world


Install Docker Compose (a utility for combining several containers into a single web application):


$ sudo curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version


Certbot ( official site ):


A utility for automatically obtaining / renewing SSL certificates from Let's Encrypt :


Before installing, enable the EPEL repository , if this has not been done already.


$ sudo yum install certbot


Docker Engine Basics


Docker engine


Basic principles:


Docker is an extra layer of abstraction; a system that automates virtualization at the operating system level .


" Operating-system-level virtualization is a virtualization method in which the kernel of an operating system supports several isolated user-space instances instead of one. These instances (often called containers or zones) are completely identical to a real server from the user's point of view. The kernel ensures complete isolation of containers, so programs from different containers cannot affect one another. "

From Wikipedia

Main advantages of using Docker:



Next, let's look at the basic commands we will need to create the cluster:


$ docker run


In essence, this is the main command - it launches a new container.


Main options:


  1. --name : a unique, human-readable container name (used in place of the auto-generated UUID identifier);
  2. --volume (-v) : a volume associated with the container, set as an absolute path to a directory;
  3. --env (-e) : an environment variable that allows additional customization of the launched container;
  4. --publish (-p) : binds the ports required for the container's operation (for example, 80 for HTTP, 443 for HTTPS).
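Put together, these options might look like this. This is a purely illustrative sketch: the container name, path and variable are placeholders, not taken from a real project:

```shell
# Hypothetical example combining the options above: run an nginx container
# named "my-site", mount a local directory with static files, set an
# environment variable and publish a port.
$ docker run -d \
    --name my-site \
    --volume /srv/my-site/html:/usr/share/nginx/html \
    --env TZ=Europe/Moscow \
    --publish 8080:80 \
    nginx
```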

$ docker ps


The command that lists the running containers.


$ docker stop container-name


The command that stops the specified container.


$ docker rm container-name


Deletes a specific container.


Warning : before removing the container, you must stop it ( docker stop )!


You can study each command in more detail in the official documentation . In this article I have covered only the basic commands needed to successfully start working with Docker.


You will see specific examples of using docker run a little further on in this article.


Configuring virtual hosts


Nginx Reverse Proxy


Problem: a certain difficulty in building a cluster with virtual hosts in different containers is that a given port can be "listened" to by only one container (configured via --publish). By default, we can create only one container that responds to requests to the server on port 80 and/or 443 (the HTTP and HTTPS protocols, respectively).


Solution: the fairly obvious way to solve this problem is to use a Reverse Proxy encapsulated in a single container that listens on ports 80 and 443. The job of this container is to automatically forward requests to the other containers according to the virtual hosts being used.


Such a container is openly available on Docker Hub - nginx-proxy .


In addition to solving the virtual-host problem, it supports working with SSL certificates by default, which makes it possible to provide HTTPS-protected access to a site.


Before starting this Reverse Proxy container, let's get SSL certificates for the domains that we want to use as virtual hosts.


Getting a free SSL certificate


To obtain an SSL certificate, we will use the free Let's Encrypt service, for which we already installed the certbot utility in the previous steps. I will not dwell on the details of using this utility (it is all in the official documentation ).


I’ll just give you a ready-made command for automatically obtaining a free SSL certificate for your domain:


$ sudo certbot certonly -n -d yourdomain.com --email your@email.com --standalone --noninteractive --agree-tos


--standalone --noninteractive --agree-tos - these parameters are needed so that certbot runs non-interactively and generates the certificate using its own temporary web server, without being tied to any particular web server installed on the machine.


As a result of the successful execution of this command, two files will be generated:


/etc/letsencrypt/live/yourdomain.com/fullchain.pem


/etc/letsencrypt/live/yourdomain.com/privkey.pem


For nginx-proxy to work correctly, we need to place all certificate files in one directory, using two files per domain name, named as follows: yourdomain.com.crt (the certificate file) and yourdomain.com.key (the private key).


In this case, it is logical to use symbolic links. Example:


$ mkdir ssl-certs


$ cd ssl-certs


$ ln -s /etc/letsencrypt/live/yourdomain.com/fullchain.pem ./yourdomain.com.crt


$ ln -s /etc/letsencrypt/live/yourdomain.com/privkey.pem ./yourdomain.com.key


Do not pay special attention to the .pem extension - the contents of the files do not change.
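The naming convention above can be sanity-checked in an isolated temporary directory, without touching /etc/letsencrypt at all (the domain name here is a placeholder):

```shell
# Reproduce the layout nginx-proxy expects, using a temporary directory
# instead of /etc/letsencrypt (yourdomain.com is a placeholder).
tmp=$(mktemp -d)
mkdir -p "$tmp/live/yourdomain.com" "$tmp/ssl-certs"
touch "$tmp/live/yourdomain.com/fullchain.pem" "$tmp/live/yourdomain.com/privkey.pem"

# One .crt and one .key symlink per domain, pointing at the .pem files.
ln -s "$tmp/live/yourdomain.com/fullchain.pem" "$tmp/ssl-certs/yourdomain.com.crt"
ln -s "$tmp/live/yourdomain.com/privkey.pem" "$tmp/ssl-certs/yourdomain.com.key"

ls -l "$tmp/ssl-certs"
```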


In the same way, we can obtain certificates for any domain names we own and then use them as virtual hosts. The only requirement is that the A records of these domain names point to the external IP address of the server on which you run certbot certonly ...
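Keep in mind that Let's Encrypt certificates are only valid for 90 days, so renewal is usually automated. A possible crontab entry - an assumption on my part, not part of the original setup, and it presumes the proxy container was started with --name nginx-proxy (standalone renewal needs ports 80/443 free, so the proxy is stopped for the duration via certbot's hooks):

```shell
# Hypothetical crontab entry: renew all certificates every Monday at 03:00.
# nginx-proxy occupies ports 80/443, so it is stopped while certbot's own
# temporary web server answers the renewal challenge.
0 3 * * 1  certbot renew --quiet \
    --pre-hook "docker stop nginx-proxy" \
    --post-hook "docker start nginx-proxy"
```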


Having generated certificates for each domain, we are ready to launch the nginx-proxy container.


 $ docker run -d -p 80:80 -p 443:443 \
     -v /full/path/to/ssl-keys:/etc/nginx/certs \
     -v /var/run/docker.sock:/tmp/docker.sock:ro \
     jwilder/nginx-proxy

Consider this command in more detail:


  1. -p 80:80 -p 443:443 - bind ports 80 and 443 to the container. Port 80 of the server corresponds to port 80 inside the container, and likewise for port 443. In the HOST_PORT:CONTAINER_PORT format, a mapping is created between a real port of the whole machine and a port inside an individual virtual container;
  2. -v /full/path/to/ssl-keys:/etc/nginx/certs - the first volume needed to configure this container. Here we link the standard /etc/nginx/certs directory inside the container to the directory in which we manually placed the symbolic links to the certificate and private-key files for our domains (in the previous step);
  3. -v /var/run/docker.sock:/tmp/docker.sock:ro - read-only access to the Docker socket, through which nginx-proxy tracks container start and stop events and automatically regenerates its configuration;
  4. jwilder/nginx-proxy - the identifier of the image on Docker Hub. Docker Engine automatically downloads the image if it has not been downloaded before.

That's it - the first container is running! And this container is a Reverse Proxy , through which we can now attach any containerized application to a VIRTUAL_HOST .


Examples of working with different stacks


LAMP


So, we can finally proceed to launching the containers in which we will actually develop our web applications.


There are many different versions of LAMP containers on Docker Hub. Personally, I used this one: tutum-docker-lamp .


Earlier, in addition to the Docker Engine, we installed the Docker Compose utility - and only now do we begin to use it. Docker Compose is convenient for creating applications in which several containers are combined and jointly make up the application being developed.


In order to run this container in conjunction with our nginx-proxy you need:


  1. Download the tutum-docker-lamp source code into a separate directory (the most convenient way is git clone );


  2. Create a docker-compose.yml file in this working directory with roughly the following contents:

     web:
       build: .
       volumes:
         - ./www:/var/www/html
       environment:
         - MYSQL_PASS=yourmysqlpassword
         - VIRTUAL_HOST=yourdomain.com

  3. Run it with docker-compose:


     $ docker-compose up



As you can see from this example, virtual hosts are managed via nginx-proxy with just one environment variable, VIRTUAL_HOST .


Pay attention to the ./www:/var/www/html binding. Obviously, the www folder becomes the working directory of your site (you must create it manually). All files in this directory automatically end up in /var/www/html inside the running container.
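Once the containers are up, the virtual-host routing can be checked from the server itself. This is a hypothetical check of my own, assuming nginx-proxy is already running on the same machine:

```shell
# Ask the local nginx-proxy for our site by sending the matching Host header
# (yourdomain.com is the placeholder from docker-compose.yml above).
$ curl -i -H "Host: yourdomain.com" http://localhost/
```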


You can get a better understanding of the syntax of the docker-compose.yml configuration file in the official documentation .


LEMP


Running a LEMP container is basically no different from the example above.


First we find the container in the Docker Hub. For example: docker-lemp .


Download the container source and add a docker-compose.yml . Inside this settings file of our custom container, you can not only set the VIRTUAL_HOST environment variable, but also configure everything that the Dockerfile allows. For example, the Dockerfile defines:


VOLUME /var/www/


Therefore, you can bind this volume in docker-compose.yml like this:


volumes:
  - ./www:/var/www


NodeJS + ExpressJS + MongoDB


An example of this configuration: docker-nodejs-mongodb-example .


The docker-compose.yml file looks like this:


 web:
   build: .
   volumes:
     - "./api:/src/app"
   environment:
     - VIRTUAL_HOST=yourdomain.com
   links:
     - "db:mongo"
 db:
   image: mongo
   ports:
     - "27017:27017"
   volumes:
     - ./data/db:/data/db

In this case, two linked containers are created: one for the database (MongoDB), the other for the NodeJS application itself.


This bundle of containers is launched with the same docker-compose up command.


Subtleties in working with gitlab / gitlab-ce


GitLab CE on Docker Engine


Some more complex containers require additional configuration to run behind nginx-proxy . These containers include gitlab-ce .


First, I will give a fully working version of the command to run this container, taking into account the configuration discussed in this article, and then explain some of its details below.


So:


 $ docker run --detach \
     --hostname gitlab.yourdomain.com \
     --publish 2289:22 \
     --restart always \
     --name custom-gitlab \
     --env GITLAB_OMNIBUS_CONFIG="nginx['listen_port'] = 80; nginx['listen_https'] = false; nginx['proxy_set_headers'] = { \"X-Forwarded-Proto\" => \"https\", \"X-Forwarded-Ssl\" => \"on\" }; gitlab_rails['gitlab_shell_ssh_port'] = 2289; external_url 'https://gitlab.yourdomain.com'; gitlab_rails['smtp_enable'] = true; gitlab_rails['smtp_address'] = 'smtp.mailgun.org'; gitlab_rails['smtp_port'] = 2525; gitlab_rails['smtp_authentication'] = 'plain'; gitlab_rails['smtp_enable_starttls_auto'] = true; gitlab_rails['smtp_user_name'] = 'postmaster@mg.yourdomain.com'; gitlab_rails['smtp_password'] = 'password'; gitlab_rails['smtp_domain'] = 'mg.yourdomain.com';" \
     --env VIRTUAL_HOST="gitlab.yourdomain.com" \
     --volume /srv/gitlab/config:/etc/gitlab \
     --volume /srv/gitlab/logs:/var/log/gitlab \
     --volume /srv/gitlab/data:/var/opt/gitlab \
     gitlab/gitlab-ce:latest

Run via NGINX Reverse Proxy + HTTPS


In order for the scheme with Reverse Proxy to work in this case, you need to add:


 nginx['listen_port'] = 80;
 nginx['listen_https'] = false;
 nginx['proxy_set_headers'] = { "X-Forwarded-Proto" => "https", "X-Forwarded-Ssl" => "on" };

The reason is that when working with containers, nginx-proxy forwards requests to port 80 inside them, not 443. Without the additional headers ( proxy_set_headers ), the nginx configuration inside the gitlab-ce container will not accept the request (error 502 "Bad Gateway" ).


In addition, it is important to add:


external_url 'https://gitlab.yourdomain.com';


Port 22


The point of this line:


--publish 2289:22


If access to the host machine itself goes over the SSH protocol, we cannot create a direct "22:22" binding, since port 22 is already occupied by the sshd service.


The solution to this problem is described in the official gitlab-ce documentation, and it is simple: we bind any free port other than 22 on the server to port 22 inside the container. This example uses port 2289.


In parallel with this, it is important not to forget to add


gitlab_rails['gitlab_shell_ssh_port'] = 2289;


to the settings of GitLab itself.


Thus, after launching gitlab-ce and creating a repository in it, the repository will be accessible at an address of the form:


ssh://git@gitlab.yourdomain.com:2289/username/repository_name.git


SMTP server setup


Here, too, you must use GitLab's own special environment variables.


In my case (I use Google Cloud Engine ), ports 25 and 465 (i.e. the standard SMTP ports) are closed by default. One solution to this problem is to use a third-party service (such as MailGun ) as the SMTP server. To do this, use the following settings:


 gitlab_rails['smtp_enable'] = true;
 gitlab_rails['smtp_address'] = 'smtp.mailgun.org';
 gitlab_rails['smtp_port'] = 2525;
 gitlab_rails['smtp_authentication'] = 'plain';
 gitlab_rails['smtp_enable_starttls_auto'] = true;
 gitlab_rails['smtp_user_name'] = 'postmaster@mg.yourdomain.com';
 gitlab_rails['smtp_password'] = 'password';
 gitlab_rails['smtp_domain'] = 'mg.yourdomain.com';

And finally, do not forget about --env VIRTUAL_HOST="gitlab.yourdomain.com" - the environment variable for nginx-proxy itself.


That's all. After executing this command, Docker will launch a fully functioning container with GitLab CE.


Standard gitlab-ce update process


This is the last point that I want to separately highlight in this guide.


The process of updating GitLab under Docker comes down to a few commands:


  1. docker stop custom-gitlab - stop the running container;


  2. docker rm custom-gitlab - remove the GitLab CE container.


     An important point: removing the container does not delete the data created while using the system, so you can execute this command without any fear.


  3. docker pull gitlab/gitlab-ce - actually update the container image;


  4. execute the long command (example above) with which we initially launched the container.

That's all. After completing these four commands, GitLab is automatically updated to the latest version and relaunched through the Docker Engine.
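These four steps can be collected into a small script. This is only a sketch: it assumes the long docker run command from above has been saved into a hypothetical wrapper script run-gitlab.sh:

```shell
#!/bin/sh
# Sketch of the GitLab CE update procedure described above.
set -e

docker stop custom-gitlab      # 1. stop the running container
docker rm custom-gitlab        # 2. remove it (data survives in the /srv/gitlab volumes)
docker pull gitlab/gitlab-ce   # 3. fetch the latest image
./run-gitlab.sh                # 4. relaunch with the original long `docker run` command
```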


Results


So, having followed this guide, you should end up with a Docker cluster based on an NGINX Reverse Proxy, in which each web application has its own virtual host with support for the secure HTTPS protocol.


Alongside the web applications, GitLab is fully configured, up to and including access to an SMTP server.


I really hope that this small study of mine will be useful, or at least interesting, to many HabraHabr readers. Of course, I will be glad to hear criticism from professionals, as well as additions or improvements to the article!


Thanks for your attention!



Source: https://habr.com/ru/post/317636/

