
Docker Basics in X hours and Y days

0. Introduction


The purpose of this article is to collect a small handful of basic information, minimally enough to start working with Docker on a daily basis and to remove from the working machine the locally installed apache, mysql, virtualenv, python3, mongodb, memcached, redis, php5, php7 and the rest of the zoo we use in development, which also often conflicts with itself from version to version.

Besides, I am on a bus with nothing to do for the next 7 hours. And in addition, I will finally gather in one place the links and commands for which I periodically have to dig into the documentation, for example, how to add an alias IP to the loopback interface on a Mac: sudo ifconfig lo0 alias 10.200.10.1/24 (why this is needed will be explained later).

The article is called what it is called because I could not estimate with any accuracy how much time the task actually needs. It took me about 6 hours over 3 days, in many ways because at that moment I was working full time in an office and mastering Docker, so to speak, on the job.

It can take you fewer days or more hours, depending on how hard you go at it.
But my personal opinion is that it is better not to try to "rush in" in one day. Simply because if you have never dealt with this before, then by the end of the first day your head will look something like this.

[image]

And it is better to stop at that point, digest the information, and maybe even sleep on it.

1. Theory


If you have previously dealt with virtual machines and tools such as VirtualBox, VMware, Vagrant and the like, better forget about them.
My personal mistake was trying to work with Docker as if it were a virtual machine. Docker is a means of virtualizing processes, not systems. An important rule: each process gets its own container.

A container should be perceived as a separate process, and vice versa. For example, you should not stuff mysql and redis into one container, or worse, the whole bunch of apache + php + mysql.

Basic terms


Image - the assembled subsystem necessary for the process to run, stored as an image.
Container - a process initialized from an image. A container exists only while it is running. It is like an instance of a class, where the image is the class itself. Well, I think the idea is clear.
Host - the environment in which Docker runs. Simply put, your local machine.
Volume - disk space shared between the host and the container. Put simply, it is a folder on your local machine mounted inside the container: change something here and it changes there, and vice versa. Miracle.
Dockerfile - a file with a set of instructions for building the image of a future container.
Service - essentially a running image (one or more containers), additionally configured with options such as opened ports, folder mappings (volumes) and so on. This is usually done in a docker-compose.yml file.
Docker-compose (often just "compose", not to be confused with PHP's composer) - a tool that simplifies building and running a system consisting of several interconnected containers.
Build - the process of creating an image from the set of instructions in a Dockerfile, or from several Dockerfiles if the build is done with compose.
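To make these terms less abstract, here is a quick sketch with the docker CLI (the image name and paths are just examples, not part of the build described in this article):

# image: download a prebuilt subsystem from the hub
docker pull nginx:latest
# container: start a process from that image
docker run -d --name web -p 80:80 nginx:latest
# containers only exist while running - compare:
docker ps        # running containers
docker ps -a     # all containers, including stopped ones
# volume: mount a host folder inside the container
docker run -d -v $(pwd)/html:/usr/share/nginx/html -p 8080:80 nginx:latest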
Later in this article (tomorrow) I will describe the process of building a bunch of nginx + mysql + php7-fpm, with examples and descriptions of the dockerfile and docker-compose files.

Briefly about how image building works


First, there is the Docker Hub, a repository where everyone who wants to assert themselves publishes their own image builds. Some of them deserve many thanks for this, and some do not; everything is just like in any other dump of packages.

Usually a Dockerfile begins with the FROM instruction, which indicates which package/image from the hub to build upon.

Next usually comes the MAINTAINER instruction, whose task is to immortalize the name of the creator of the next brilliant creation.

Then the most common commands are:
RUN - executes a command inside the image.
ADD - takes files from the host and puts them inside the image.
And also COPY, EXPOSE, ENTRYPOINT, CMD - you will learn about all of these along the way.

Now, attention. Docker executes the instructions from the Dockerfile sequentially, applying each one on top of the previous result. This is how the layer cache is organized.
Didn't get it? Let me show you. Here is the simplest Dockerfile:

FROM ubuntu:latest
MAINTAINER igor
RUN apt-get update
RUN apt-get install nginx
ADD ./nginx.conf /etc/nginx/
EXPOSE 80
CMD ["nginx"]

How Docker builds it:

1. download the ubuntu image with the latest tag, save it with ID = aaa
2. take image aaa, set maintainer = igor, save the result with ID = aab
3. take image aab, launch a container and execute "apt-get update" inside, stop the container, save the resulting image with ID = aac
4. take image aac, launch a container and execute "apt-get install nginx" inside, stop the container, save the resulting image with ID = aad
5. take image aad, launch a container and copy the ./nginx.conf file (the path is relative to the folder containing the Dockerfile) inside the container to /etc/nginx/, stop the container, save the resulting image with ID = aae
...

Is it clear now?

The IDs here are made up, but it is important to remember that the identifiers of these "intermediate" images are directly derived from the instructions themselves, from the files added by the ADD instruction, and from the ID of the parent image. That is, before each step is executed, the ID (hash) of the image is calculated first and looked up in the local cache; only if there is no such ID in the cache is the step actually executed and its result stored in the cache. Otherwise, the image from the cache is used.
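You can look at these intermediate layers yourself; docker history lists them together with the instructions that created them (assuming you built the file above with the tag mynginx):

docker build -t mynginx .     # build the example Dockerfile from the current folder
docker history mynginx        # list the layers and the instructions that produced them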

It also means that if you decide to change a command, for example RUN apt-get install nginx, then the hash (ID) of that instruction changes and none of the cache after that point will be used. So do not be surprised if, after changing one letter in the maintainer's name, your entire build is rebuilt from the very beginning.
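A practical consequence of this (my sketch, not part of the original build): put the instructions that change rarely at the top, and add the files that change often as late as possible, so the heavy layers stay cached:

FROM ubuntu:latest
# changes rarely - this layer stays in the cache
RUN apt-get update && apt-get install -y nginx
# changes often - keep it last so only this layer is rebuilt
ADD ./nginx.conf /etc/nginx/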

Also, given the described execution scheme, it becomes clear why it makes no sense to put into an instruction a command that does not save anything after it finishes - a frequent question on Stack Overflow is "I ran something, but in the next instruction it is not running". For example, someone wants to activate a virtualenv with source env/bin/activate and run pip install in the next instruction:

RUN source /app/env/bin/activate
RUN pip install something

or, another example, start mongodb and in the next instruction create a user/database or import a database from a file (there are reasons why it is better not to do this at all, but that is not the point now):

RUN service mongodb start
RUN mongo db --eval 'db.createUser({user:"dbuser",pwd:"dbpass",roles:["readWrite","dbAdmin"]})'

As you can see from the build process, each next container knows nothing about what was started in the previous step, so such instructions have to be combined into one with &&:

RUN source /app/env/bin/activate \
&& pip install something

RUN service mongodb start \
&& mongo db --eval 'db.createUser({user:"",...})'

But in general, since containers are isolated, I do not see much sense in using tools like nvm, virtualenv, rbenv and similar stuff inside them. Just install what you need and that's it.

I think this much theory is enough to begin working.

2. Practice


Before starting, I think it is worth taking a little rest and making yourself some tea.
And when you return, first install Docker and Docker Compose.

A small digression for those reading this under Windows.

Seriously?

No, of course it is possible, and there are installation files and instructions on the site. And personally, I have nothing against Windows when it comes to home use as a media station. But if you are developing under Windows, then I simultaneously admire you and offer my condolences. Frankly, I never got Docker running under Windows, only under Ubuntu and on a Mac. However, I heard the groans of a colleague from a nearby office who tried; he even managed to run the build, but when it came to symlinks inside a volume, everything fell apart. So if you got here searching for recipes for working with Docker under Windows, I have bad news: you will not find them here.

So, at this stage you already have Docker installed, and the blue whale in your tray is happily gurgling its blocks (sorry, I could not resist). Now follow the link and go through the Get Started tutorial.

Now let's imagine that we are developing a website in PHP and will serve it with the nginx + php7-fpm + mysql bunch.

Here is a very primitive Dockerfile for the PHP service:

FROM php:7-fpm
# Install modules
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg62-turbo-dev \
libmcrypt-dev \
libpng12-dev \
libicu-dev \
--no-install-recommends
RUN docker-php-ext-install mcrypt zip intl mbstring pdo_mysql exif \
&& docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install gd

ADD ./php.ini /usr/local/etc/php/
ADD ./www.conf /usr/local/etc/php/

RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/*

EXPOSE 9000
CMD ["php-fpm"]

In short human language:
- take the official php:7-fpm image as the base;
- install the system libraries needed to build the PHP extensions;
- install and configure the extensions themselves (mcrypt, zip, intl, mbstring, pdo_mysql, exif, gd);
- copy our php.ini and www.conf configs into the image;
- purge the build packages and clean the apt caches and temporary files to keep the image smaller;
- expose port 9000 and run php-fpm.


With PHP figured out, we now need images for the nginx and mysql services, and we also need to assemble all the services into a complete system.

In the case of nginx and mysql we do not even need to write our own Dockerfile, since no additional extensions have to be installed. Here is what our project's docker-compose.yml will look like:

app:
  build: docker/php  # the folder containing the Dockerfile above
  working_dir: /app
  volumes:
    - ./:/app
  expose:
    - 9000
  links:
    - db
nginx:
  image: nginx:latest
  ports:
    - "80:80"
  volumes:
    - ./:/app
    - ./docker/nginx/vhost.conf:/etc/nginx/conf.d/vhost.conf
  links:
    - app
db:
  image: mysql:5.7
  volumes:
    - /var/lib/mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ROOT_PASSWORD: root
    MYSQL_DATABASE: dbname
    MYSQL_USER: dbuser
    MYSQL_PASSWORD: dbpassword

Three services are declared here: app, nginx and db. The app service is built from our Dockerfile; the others simply use images from the hub.

The volumes directive mounts folders from the host machine inside the container; this is how nginx gets its config and how the database data survives restarts.
The links directive binds services to each other: app is linked to db, which means that inside the app container the host name "db" will be available and will point to the corresponding container.
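If you want to see that link with your own eyes, you can resolve the name from inside a one-off app container (a sketch; --rm simply removes the temporary container afterwards):

# start a throwaway app container (compose starts the linked db too)
# and look up the host entry that the link created
docker-compose run --rm app getent hosts db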

It's simple (irony).
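To actually bring the whole bundle up, a couple of compose commands are enough (run them from the folder containing docker-compose.yml):

docker-compose build   # build the app image from our Dockerfile
docker-compose up -d   # start all three services in the background
docker-compose logs    # see what the services are writing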

There is a rather interesting template, yii2-starter-kit, which ships with a good implementation of the described php7-fpm + nginx + mysql build, as well as mailcatcher.

Those who, like me, prefer python and django can simply follow the official tutorial from Docker - docs.docker.com/compose/django - without breaking a sweat.
Plus, once you understand how it all works, it will not be difficult to rework any build you like to fit your needs.

Pitfalls


- macOS. Access from a container to a service on the host (for example mongo or mysql).
Due to the limitations of the Docker for Mac networking stack, you cannot "just take and connect" to localhost. There are two workarounds:

a) official and simple (available since Docker 17.06) - use the special DNS name docker.for.mac.localhost to connect (available only in Docker for Mac).

b) add an alias IP to the lo0 network device:
sudo ifconfig lo0 alias 10.200.10.1/24
and use this address to connect.
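For example, with mongo running on the Mac itself, a container would reach it through that alias address instead of localhost (a hypothetical setup; adjust the service and port to your case):

# on the host (Mac): add the alias address to the loopback interface
sudo ifconfig lo0 alias 10.200.10.1/24
# inside the container: connect to the host's mongo through the alias
mongo --host 10.200.10.1 --port 27017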

- MongoDB. You cannot mount a host folder for the data on a Mac. The reasons are described in the official image docs:
"WARNING (Windows & OS X): The default Docker setup on Windows and OS X uses a VirtualBox VM to host the Docker daemon. Unfortunately, the mechanism VirtualBox uses to share folders between the host system and the Docker container is not compatible with the memory mapped files used by MongoDB."
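A common workaround (the same thing the db service above already does for mysql) is to keep the data in a container-internal volume instead of a mounted host folder; a minimal sketch:

db:
  image: mongo
  volumes:
    - /data/db   # a docker-managed volume, not a shared host folder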

- Under Windows, symlinks do not work inside mounted volumes. That is expected, but it is very unpleasant to find out about it after everything else has already started working.

- Differences between ENTRYPOINT and CMD - the difference between the two is described here in detail and clearly.
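A minimal illustration of the difference (my own sketch, not from the article referenced above): ENTRYPOINT fixes the executable, CMD supplies its default arguments, and arguments passed to docker run replace CMD:

ENTRYPOINT ["nginx"]
CMD ["-g", "daemon off;"]
# docker run myimage       -> runs: nginx -g 'daemon off;'
# docker run myimage -t    -> runs: nginx -t   (the run arguments replace CMD)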

- UPD, an addition from saskasa: on a Mac, the speed of writing from the container to the host disk (mounted as a VOLUME) is very low; for a sense of scale, about 50-100 times slower.

Source: https://habr.com/ru/post/337306/

