
Flask Mega-Tutorial, Part XIX: Deployment on Docker Containers

(edition 2018)


Miguel Grinberg






This is the nineteenth part of the Flask Mega-Tutorial series, in which I am going to deploy Microblog to the Docker container platform.





Note 1: If you are looking for the old versions of this course, you can find them here.


Note 2: If you would like to support my (Miguel's) work, or simply don't have the patience to wait a week for each article, I (Miguel Grinberg) offer the complete version of this guide (in English) as an e-book or video set. For more information, visit learn.miguelgrinberg.com.


In Chapter 17 you learned about traditional deployments, in which you have to take care of every little aspect of the server configuration. Then, in Chapter 18, I took you to the other extreme when I introduced you to Heroku, a service that takes complete control of the configuration and deployment tasks. In this chapter it's time to learn about a third deployment strategy based on containers, in particular on the Docker container platform. This third option is a middle ground between the other two.


Containers are built on lightweight virtualization technology that allows an application, along with its dependencies and configuration, to run in complete isolation, but without the need for a full-featured virtualization solution such as a virtual machine, which requires significantly more resources and can sometimes perform noticeably worse than the host. A system configured as a container host can run many containers, all of which share the host's kernel and have direct access to the host's hardware. Contrast this with virtual machines, which have to emulate a complete system: CPU, disk, other hardware, a kernel, and so on.


Despite having to share the kernel, the level of isolation in a container is fairly high. A container has its own file system, and can be based on an operating system different from the one used by the container host. For example, you can run containers based on Ubuntu Linux on a Fedora host, or vice versa. While containers are a technology native to the Linux operating system, thanks to virtualization it is also possible to run Linux containers on Windows and Mac OS X hosts. This allows you to test deployments on your development system, and also to incorporate containers into your development workflow if you wish.


GitHub links for this chapter: Browse, Zip, Diff.


Install Docker CE


Although Docker is not the only container platform, it is by far the most popular, so it is going to be my choice. There are two editions of Docker: the free community edition (CE) and the subscription-based enterprise edition (EE). For the purposes of this tutorial, Docker CE is perfectly adequate.


To work with Docker CE, you first need to install it on your system. There are installers for Windows, Mac OS X, and several Linux distributions available on the Docker website. If you are running Microsoft Windows, it is important to note that Docker CE requires Hyper-V. The installer will enable it for you if necessary, but keep in mind that turning on Hyper-V prevents other virtualization technologies from working, such as VirtualBox.


After installing Docker CE in the system, you can verify the success of the installation by entering the following command in a terminal window or command line:


$ docker version
Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:09 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:45:38 2017
 OS/Arch:      linux/amd64
 Experimental: true

Build container image


The first step in creating a container for Microblog is to build an image for it. A container image is a template that is used to create a container. It contains a complete representation of the container's file system, along with various settings related to networking, start-up options, and so on.


The most basic way to create a container image for an application is to start a container with the base operating system you want to use (Ubuntu, Fedora, etc.), connect to a bash shell process running in it, and then manually install the application, perhaps following the guidelines I presented in Chapter 17 for a traditional deployment. After you install everything, you can take a snapshot of the container, and that snapshot becomes the image. This type of workflow is supported by the docker command, but I'm not going to discuss it, because it is inconvenient to install the application manually every time you need to build a new image.
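For illustration only, here is a rough sketch of that manual workflow; the container and image names below are arbitrary placeholders:

# start a container from a base image and get an interactive shell
$ docker run -it --name my-build ubuntu:16.04 bash
# ... install the application by hand inside the shell, then exit ...

# snapshot the modified container into a new image
$ docker commit my-build microblog:manual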


A better approach is to build the container image from a script. The command that builds container images from scripts is docker build. This command reads and executes build instructions from a file called Dockerfile, which I have yet to create. The Dockerfile is essentially an installation script that executes the deployment steps for the application, plus some container-specific settings.


Here is the basic Dockerfile for Microblog:


Dockerfile: Dockerfile for Microblog.

FROM python:3.6-alpine

RUN adduser -D microblog

WORKDIR /home/microblog

COPY requirements.txt requirements.txt
RUN python -m venv venv
RUN venv/bin/pip install -r requirements.txt
RUN venv/bin/pip install gunicorn

COPY app app
COPY migrations migrations
COPY microblog.py config.py boot.sh ./
RUN chmod +x boot.sh

ENV FLASK_APP microblog.py

RUN chown -R microblog:microblog ./
USER microblog

EXPOSE 5000
ENTRYPOINT ["./boot.sh"]

Each line in the Dockerfile is a command. The FROM command specifies the base container image on which the new image will be built. The idea is that you start from an existing image, add or change some things, and end up with a derived image. Images are referenced by a name and a tag, separated by a colon. The tag is used as a versioning mechanism, allowing a container image to be offered in more than one variant. The name of my chosen image is python, which is the official Docker image for Python. The tags for this image let you specify the interpreter version and the base operating system. The 3.6-alpine tag selects a Python 3.6 interpreter installed on Alpine Linux. The Alpine Linux distribution is often used instead of more popular ones such as Ubuntu because of its small size. You can see what tags are available for the Python image in the Python image repository.


The RUN command executes an arbitrary command in the context of the container. This is similar to typing a command in a shell prompt. The adduser -D microblog command creates a new user named microblog. Most container images have root as the default user, but it is not a good idea to run an application as root, so I create my own user.


The WORKDIR command sets the default directory where the application is going to be installed. When I created the microblog user above, a home directory for it was also created, so now I make that directory the default. The new default directory is going to apply to all remaining commands in the Dockerfile, and also later when the container is executed.


The COPY command transfers files from your machine to the container's file system. This command takes two or more arguments: the source and the destination files or directories. The source file(s) must be relative to the directory where the Dockerfile is located. The destination can be an absolute path, or a path relative to the directory that was set in a previous WORKDIR command. In this first COPY command, I copy the requirements.txt file to the microblog user's home directory in the container file system.


Now that I have the requirements.txt file in the container, I can create a virtual environment using the RUN command. First I create it, and then I install all the requirements into it. Because the requirements file only includes generic dependencies, I then explicitly install gunicorn, which I'm going to use as a web server. Alternatively, I could have added gunicorn to my requirements.txt file.


The three COPY commands that follow install the application in the container, copying the app package, the migrations directory with the database migrations, and the microblog.py and config.py scripts from the top-level directory. I also copy a new file, boot.sh, which I will discuss below.


The RUN chmod command ensures that this new boot.sh file is correctly set as an executable file. If you are on a Unix-based file system and your source file is already marked as executable, then the copied file will also have the executable bit set. I added an explicit chmod because on Windows it is harder to set executable bits. If you are working on Mac OS X or Linux you probably don't need this step, but it does not hurt to have it anyway.
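As an aside (not part of the original deployment), if your project lives in git, another way to handle this on Windows is to record the executable bit in the repository itself, so the file arrives already executable:

# mark boot.sh as executable in git's index, even on Windows
$ git update-index --chmod=+x boot.sh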


The ENV command sets an environment variable inside the container. I need to set FLASK_APP, which is required to use the flask command.


The RUN chown command that follows sets the owner of all the directories and files stored in /home/microblog to the new microblog user. Even though I created this user near the top of the Dockerfile, the default user for all the commands remained root, so all these files need to be switched over to the microblog user so that this user can work with them when the container is started.


The USER command on the next line makes this new microblog user the default for any subsequent instructions, and also for when the container is started.


The EXPOSE command configures the port that this container will use for its server. This is necessary so that Docker can configure the network in the container appropriately. I chose the standard Flask port 5000, but this can be any port.


Finally, the ENTRYPOINT command defines the default command that should be executed when the container is started. This is the command that will start the application web server. To keep things well organized, I decided to create a separate script for this, and that is the boot.sh file that I copied into the container earlier. Here are the contents of this script:


boot.sh: Docker container start-up script.

#!/bin/sh
source venv/bin/activate
flask db upgrade
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This is a fairly standard start-up script that is quite similar to how the deployments in Chapter 17 and Chapter 18 were started. I activate the virtual environment, upgrade the database through the migration framework, compile the language translations, and finally run the server with gunicorn.


Note the exec that precedes the gunicorn command. In a shell script, exec triggers the process running the script to be replaced with the given command, instead of starting that command as a new process. This is important because Docker associates the life of the container with the first process that runs in it. In cases like this one, where the start-up process is not the main process of the container, you need to make sure that the main process takes the place of that first process, to ensure that the container is not terminated early by Docker.
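If you want to convince yourself that this replacement actually happened, one option (my suggestion, not from the original text) is to list the processes inside the running container and check which one holds PID 1:

# the first process in the container should be gunicorn, not boot.sh
$ docker exec microblog ps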


An interesting aspect of Docker is that anything the container writes to stdout or stderr is captured and stored as the container's logs. For that reason, the --access-logfile and --error-logfile options are both configured with a -, which sends the logs to standard output so that Docker stores them as logs.
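Thanks to this arrangement, you can inspect the application's output at any time with the docker logs command:

$ docker logs microblog       # print the captured output so far
$ docker logs -f microblog    # keep streaming new log lines (Ctrl-C to stop)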


With the Dockerfile created, you can now build a container image:


 $ docker build -t microblog:latest . 

The -t argument passed to docker build sets the name and tag for the new container image. The . indicates the base directory where the container is to be built; this is the directory where the Dockerfile is located. The build process is going to evaluate all the commands in the Dockerfile and create the image, which will be stored on your own machine.


You can get a list of the images that you have locally with the docker images command:


$ docker images
REPOSITORY    TAG          IMAGE ID        CREATED              SIZE
microblog     latest       54a47d0c27cf    About a minute ago   216MB
python        3.6-alpine   a6beab4fa70b    3 months ago         88.7MB

This list includes your new image, and also the base image on which it was built. Any time you make changes to the application, you can update the container image by running the build command again.


Run a container


With an image already built, you can now run the container version of the application. This is done with the docker run command, which usually takes a large number of arguments. I'm going to start by showing you a basic example:


 $ docker run --name microblog -d -p 8000:5000 --rm microblog:latest 021da2e1e0d390320248abf97dfbbe7b27c70fefed113d5a41bb67a68522e91c 

The --name option provides a name for the new container. The -d option tells Docker to run the container in the background. Without -d the container runs as a foreground application, blocking your command prompt. The -p option maps container ports to host ports. The first port is the port on the host computer, and the one on the right is the port inside the container. The above example exposes port 5000 in the container on port 8000 on the host, so you will access the application on port 8000, even though internally the container is using port 5000. The --rm option deletes the container once it exits. While this isn't required, containers that finish or are interrupted are usually not needed anymore, so they can be automatically deleted. The last argument is the container image name and tag to use for the container. After you run the above command, you can access the application at http://localhost:8000.


The output of docker run is the ID assigned to the new container. This is a long hexadecimal string that you can use whenever you need to refer to the container in subsequent commands. In fact, only the first few characters are necessary, enough to make the ID unique.
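For example, assuming the ID shown above, these are equivalent ways to refer to the same container:

$ docker stop 021da2e1e0d390320248abf97dfbbe7b27c70fefed113d5a41bb67a68522e91c
$ docker stop 021d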


If you want to see what containers are running, you can use the docker ps command:


$ docker ps
CONTAINER ID  IMAGE             COMMAND      PORTS                   NAMES
021da2e1e0d3  microblog:latest  "./boot.sh"  0.0.0.0:8000->5000/tcp  microblog

As you can see, even docker ps shortens container IDs. If you now want to stop the container, you can use docker stop:


 $ docker stop 021da2e1e0d3 021da2e1e0d3 

Let me remind you that the application configuration has several options that are sourced from environment variables. For example, the Flask secret key, the database URL and the email server options are all imported from environment variables. In the docker run example above I did not worry about those, so all these configuration options are going to use their defaults.


In a more realistic example, these environment variables would be set inside the container. You saw in the previous section that the ENV command in the Dockerfile sets environment variables, and it is a handy option for variables that are going to be static. For variables that depend on the installation, however, it isn't convenient to make them part of the build process, because you want a container image that is fairly portable. If you want to give your application to another person as a container image, you would want that person to be able to use it as is, and not have to rebuild it with different variables.


So build-time environment variables can be useful, but there is also a need for run-time environment variables that can be set via the docker run command, and the -e option is used for these. The following example sets a secret key and the email settings for a Gmail account:


$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    microblog:latest

Such a long docker run command line is not uncommon, due to the many environment variable definitions.
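If you find these command lines unwieldy, docker can also read variables from a file through the --env-file option. A minimal sketch, where the microblog.env file name is my own and is not used elsewhere in this chapter:

# microblog.env would contain one VAR=value pair per line
$ docker run --name microblog -d -p 8000:5000 --rm \
    --env-file microblog.env microblog:latest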


Using third-party "containerized" services


The container version of Microblog is looking good, but I haven't really thought much about storage yet. The problem is that I have not set the DATABASE_URL environment variable, so the application is using the default SQLite database, which is backed by a file on disk. What do you think is going to happen to that SQLite file when you stop and delete the container? The file is going to disappear!


The file system in a container is ephemeral, meaning that it goes away when the container goes away. You can write data to the file system, and the data is going to be there if the container needs to read it, but if for any reason you need to recycle your container and replace it with a new one, any data that the application saved to disk is going to be lost forever.
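For completeness: Docker can also persist data past a container's life with named volumes, although this chapter takes a different route. A minimal sketch of the mechanism, with an illustrative volume name and mount path:

# data written under /data survives container removal, in the microblog-data volume
$ docker run --name microblog -d -p 8000:5000 --rm \
    -v microblog-data:/data microblog:latest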


A good design strategy for a containerized application is to make the application containers stateless. If you have a container that has application code and no data, you can throw it away and replace it with a new one without any problems; the container becomes truly disposable, which is great in terms of simplifying the deployment of upgrades.


But of course, this means that the data must be put somewhere outside the application container. This is where the fantastic Docker ecosystem comes into play. The Docker Container Registry contains a large variety of container images. I have already told you about the Python container image, which I use as the base image for my Microblog container. In addition, Docker maintains images for many other languages, databases, and services in the registry, and if that isn't enough, the registry also allows companies to publish container images for their products, and regular users like you or me to publish their own images. That means that the effort to install third-party services is reduced to finding the appropriate image in the registry and starting it with a docker run command with the proper arguments.


So now I'm going to create two additional containers, one for a MySQL database and another for an Elasticsearch service, and then I'm going to run a long command line that starts the Microblog container with a bunch of options that allow it to access these two new containers.


Adding a MySQL container


Like many other products and services, MySQL has publicly available container images in the Docker registry. Like my own Microblog container, MySQL relies on environment variables that need to be passed to docker run. These configure settings such as the password, the database name, and so on. While there are many MySQL images in the registry, I decided to use the one that is officially maintained by the MySQL team. You can find detailed information about the MySQL container image on its registry page: https://hub.docker.com/r/mysql/mysql-server/.


If you remember the laborious MySQL setup process in Chapter 17, you will appreciate Docker when you see how easy it is to deploy MySQL. Here is the docker run command that starts a MySQL server:


$ docker run --name mysql -d -e MYSQL_RANDOM_ROOT_PASSWORD=yes \
    -e MYSQL_DATABASE=microblog -e MYSQL_USER=microblog \
    -e MYSQL_PASSWORD=<database-password> \
    mysql/mysql-server:5.7

That's it! On any machine on which Docker is installed, you can run the above command and get a fully installed MySQL server with a randomly generated root password, a brand new database called microblog, and a user with the same name that has full access to that database. Note that you will need to enter a proper password as the value for the MYSQL_PASSWORD environment variable.
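In case you ever need that randomly generated root password, this image prints it to its logs on first start, so you should be able to recover it with something like:

# look for the line that announces the generated root password
$ docker logs mysql 2>&1 | grep PASSWORD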


Now, on the application side, just as with the traditional deployment on Ubuntu, I need to add a MySQL client package. I'm going to use pymysql, which I can add to the Dockerfile:


Dockerfile: Add pymysql to Dockerfile.

# ...
RUN venv/bin/pip install gunicorn pymysql
# ...

Each time the application or the Dockerfile changes, the container image needs to be rebuilt:


 $ docker build -t microblog:latest . 

Now I can start Microblog again, but this time with a link to the database container, so that both can communicate via the network:


$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    microblog:latest

The --link option tells Docker to make another container accessible to this one. The argument contains two names separated by a colon. The first part is the name or ID of the container to link, in this case the mysql container that I created above. The second part defines a hostname that can be used in this container to refer to the linked one. Here I'm using dbserver as a generic name that represents the database server.


With the link between the two containers established, I can set the DATABASE_URL environment variable so that SQLAlchemy is directed to use the MySQL database in the other container. The database URL is going to use dbserver as the database host name, microblog as the database name and user, and the password that you selected when you started MySQL.
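If you want to verify the database side independently (an optional check, not part of the original text), the MySQL image ships with the mysql client, so you can open a session as the microblog user; you will be prompted for the password you chose:

# open an interactive SQL session against the microblog database
$ docker exec -it mysql mysql -u microblog -p microblog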


One thing I noticed when I was experimenting with the MySQL container is that it takes a few seconds for this container to be fully running and ready to accept database connections. If you start the MySQL container and then start the application container immediately after, the flask db upgrade command in the boot.sh script may fail because the database is not ready to accept connections. To make my solution more robust, I decided to add a retry loop in boot.sh:


boot.sh: retry to connect to the database.

#!/bin/sh
source venv/bin/activate
while true; do
    flask db upgrade
    if [[ "$?" == "0" ]]; then
        break
    fi
    echo Upgrade command failed, retrying in 5 secs...
    sleep 5
done
flask translate compile
exec gunicorn -b :5000 --access-logfile - --error-logfile - microblog:app

This loop checks the exit code of the flask db upgrade command, and if it is non-zero, it assumes that something went wrong, so it waits five seconds and then retries.


Adding an Elasticsearch container


The Elasticsearch documentation for Docker shows how to run the service as a single-node deployment for development, and as a production-ready two-node deployment. For now I'm going to go with the single-node option and use the "oss" image, which only includes the open-source engine. The container is started with the following command:


$ docker run --name elasticsearch -d -p 9200:9200 -p 9300:9300 --rm \
    -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1

This docker run command is quite similar to the ones I used for Microblog and MySQL, but there are a couple of interesting differences. First, there are two -p options, which means that this container is going to listen on two ports instead of one. Ports 9200 and 9300 are the ports used by the Elasticsearch service.
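Since port 9200 is also published on the host, you can do a quick sanity check that the service is answering:

# Elasticsearch responds with a short JSON document describing the cluster
$ curl http://localhost:9200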


The second interesting thing is the image name. For locally built images, the image name takes the <name>:<tag> format. The MySQL container used <account>/<name>:<tag>, which is the format appropriate for images published to the Docker registry. The Elasticsearch image follows the <registry>/<account>/<name>:<tag> pattern, which includes the address of the registry as the first component. This syntax is used for images that are not hosted in Docker's own registry. In this case Elasticsearch runs their own container registry service at docker.elastic.co, instead of using the main registry maintained by Docker.


Now that the Elasticsearch service is up and running, I can modify the start command for the Microblog container to create a link to it and set the Elasticsearch service URL:


$ docker run --name microblog -d -p 8000:5000 --rm -e SECRET_KEY=my-secret-key \
    -e MAIL_SERVER=smtp.googlemail.com -e MAIL_PORT=587 -e MAIL_USE_TLS=true \
    -e MAIL_USERNAME=<your-gmail-username> -e MAIL_PASSWORD=<your-gmail-password> \
    --link mysql:dbserver \
    -e DATABASE_URL=mysql+pymysql://microblog:<database-password>@dbserver/microblog \
    --link elasticsearch:elasticsearch \
    -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
    microblog:latest

Before you run this command, remember to stop your previous Microblog container if it is still running. Also be careful to put the correct database password in the proper place in the command; note that the Elasticsearch link uses the container name itself as the hostname.


The application should now be accessible at http://localhost:8000, and the search feature should work. If you run into errors while using the application, you can troubleshoot them by looking at the container logs. You will most likely want the logs for the Microblog container, which is where any Python stack traces will appear:


 $ docker logs microblog 

The Docker container registry


So now I have the complete application up and running on Docker, using three containers, two of which come from publicly available third-party images. If you would like to make your own container images available to others, you have to push them to the Docker registry, from where anybody can obtain them.


To have access to the Docker registry, you need to go to https://hub.docker.com and create an account for yourself. Make sure you pick a username that you like, because it is going to be used in all the images that you publish.


To be able to access your account from the command line, you need to log in with the docker login command:


 $ docker login 

If you have been following along, you now have an image called microblog:latest stored locally on your computer. To be able to push this image to the Docker registry, it needs to be renamed to include your account, like the image from MySQL. This is done with the docker tag command:


 $ docker tag microblog:latest <your-docker-registry-account>/microblog:latest 

If you list your images again with docker images, you are now going to see two entries for Microblog: the original one with the microblog:latest name, and a new one that also includes your account name. These are really two aliases for the same image.


To publish your image to the Docker registry, use the docker push command:


 $ docker push <your-docker-registry-account>/microblog:latest 

Now your image is publicly available, and you can document how to install and run it from the Docker registry in the same way MySQL and other vendors do.
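Anyone with Docker installed can then retrieve and run your published image directly from the registry:

$ docker pull <your-docker-registry-account>/microblog:latest
$ docker run --name microblog -d -p 8000:5000 --rm \
    <your-docker-registry-account>/microblog:latest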



Deploying containerized applications

One of the best things about having your application running in Docker containers is that, once you have tested the containers locally, you can take them to any platform that offers Docker support. For example, you could use the same servers I recommended in Chapter 17, from Digital Ocean, Linode or Amazon Lightsail. Even the cheapest offering from these providers is sufficient to run Docker with a handful of containers.


The Amazon Container Service (ECS) gives you the ability to create a cluster of container hosts on which to run your containers, in a fully integrated AWS environment, with support for scaling and load balancing, plus the option to use a private container registry for your images.


Finally, a container orchestration platform such as Kubernetes provides an even greater level of automation and convenience: it allows you to describe multi-container deployments in simple text files in YAML format, with load balancing, scaling, secure management of secrets, and rolling or canary upgrades and rollbacks.
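As an illustration only (this YAML is a hypothetical sketch, not part of this tutorial; all names and the image reference are placeholders), a minimal Kubernetes deployment description might look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microblog
spec:
  replicas: 2                  # run two identical application containers
  selector:
    matchLabels:
      app: microblog
  template:
    metadata:
      labels:
        app: microblog
    spec:
      containers:
      - name: microblog
        image: <your-docker-registry-account>/microblog:latest
        ports:
        - containerPort: 5000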




Source: https://habr.com/ru/post/353234/

