This article is a set of simple, practical instructions and tips for Docker users on Windows. It will help PHP developers get a project up and running quickly, and it describes common problems along with their solutions. It is aimed at those who do not have unlimited time to dig deeply into Docker's quirks on Windows. The author would have been infinitely grateful to find a similar article earlier - it would have saved him a lot of time and effort. The text may contain errors and inaccuracies.
The article is not a complete, comprehensive guide to Docker for Windows. It describes nothing new and reveals no previously unknown facts - you can find all of this yourself in various sources. It also does not answer the question of why the chicken crossed the road.
This is an important step: if you skip it, the instructions that follow may not work for you at all. Press Win+R, run lusrmgr.msc, and the "Local Users and Groups" snap-in will open. Go to Users, open the context menu, and choose "New User...". Add a new user (for example, dockerhost) with a password. A password is required! Add the user to the Administrators group, and to no other groups.
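If you prefer the command line, the same user can be created from an elevated command prompt. This is a sketch: dockerhost and the password are placeholders - substitute your own.

```shell
rem Run in an elevated cmd.exe.
rem Create the dedicated Docker user with a required password
net user dockerhost YourStrongPassword /add
rem Add it to Administrators only, no other groups
net localgroup Administrators dockerhost /add
```

The commands are Windows-only and require administrator rights, so run them in the same elevated session.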
Next, in the Docker settings under Shared Drives, select the drives you need and enter the credentials (login and password) of the new user, then save the settings. If you previously entered the username and password of the account you normally work under, replace them with the new user's credentials. If you want problems to surface in an unexpected place, keep using your work account and stop reading this article.
The Internet is full of docker-compose.yml configs for a PHP web server, and some of them do not run under Windows. I recommend taking a look at the docker-compose.yml generator. The config from the generator worked for me right away, so I used it as the starting point for further editing. The final result is published on GitHub. The generator's variant still has several shortcomings; the problems and their solutions are described below.
Problem: database files cannot be stored on a local (shared) drive. Under Windows, take this as an axiom and look for an acceptable way to keep the data outside the container. On Windows, that means a named volume. Just a couple of lines solve this problem:
```yaml
services:
  postgres:
    volumes:
      - db:/var/lib/postgresql/data

volumes:
  db:
```
Named volumes live in the /var/lib/docker/volumes/ folder of the MobyLinuxVM Docker virtual machine. There is no direct access to these files - only through an intermediary container - and I do not know of a way to make that folder accessible from Windows. To manage named volumes, use the docker volume command; for example, docker volume prune deletes unused volumes.
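A few docker volume subcommands cover most day-to-day needs. These require a running Docker daemon, and `db` here refers to the volume declared in the snippet above:

```shell
docker volume ls           # list all named volumes
docker volume inspect db   # show metadata, including the mountpoint inside the VM
docker volume prune        # delete volumes not used by any container
```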
You do not need to worry about ownership and permissions for files in a named volume - Docker handles all of that for you, and many thanks to it for that. Some guides on setting up persistent database storage in Docker describe dances with permission assignment; under Windows, without a named volume, none of that works. Well, it may start the first time, but it will break on restart. You cannot imagine what a relief it was when persistent storage finally worked with a named volume.
SSH keys are also added through a named volume. The reason is that private keys need special permissions, and regular (bind-mounted) volumes under Windows cannot provide them. Here is the relevant piece of docker-compose.yml:
```yaml
services:
  php:
    volumes:
      - ssh:/root/.ssh

volumes:
  ssh:
```
For this to work, you must copy the keys into the named volume, change the permissions on the private key, and test the connection. All of these actions can be combined into a single bat file:
```shell
docker run -v first_ssh:/root/.ssh --name helper busybox true
docker cp ./.ssh/. helper:/root/.ssh/
docker rm helper
docker-compose exec php ls /root/.ssh
docker-compose exec php chmod 600 /root/.ssh/id_rsa
docker-compose exec php ssh git@github.com
```
In the script, of course, substitute your own volume name for first_ssh. Let me remind you that the named volume's name is formed by Docker as COMPOSE_PROJECT_NAME + "_" + the volume key - "ssh" in our case. Replace ./.ssh with the path to your keys (or temporarily copy the key folder to wherever you run this script). If you have added your key to GitHub, then at the very end GitHub will greet you. The script is one-off and should be run immediately after the first successful start of your containers (docker-compose up -d). Repeated runs make no sense, unless you have deleted the named volume.
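As a sanity check, the volume name Compose will create can be derived in one line (assuming the project name is first, matching the script above):

```shell
# Compose prefixes every named volume with the project name:
# the folder name by default, or COMPOSE_PROJECT_NAME from .env.
COMPOSE_PROJECT_NAME=first
VOLUME_NAME="${COMPOSE_PROJECT_NAME}_ssh"
echo "$VOLUME_NAME"   # prints first_ssh
```

If the printed name does not match what `docker volume ls` shows, your project name is different from what you assumed.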
Nothing extra is needed to connect containers described in the same docker-compose.yml file: it is enough to use the service name or container name as the host name, and port numbers do not need to be changed - the default ports work. That would be the end of it, were it not necessary to send and receive requests to containers from another docker-compose.yml. This, too, is no problem: it is enough to specify the network names and the container names on that network. Here is the relevant part of docker-compose.yml:
```yaml
services:
  php:
    networks:
      # this network
      - default
      # external network
      - second_default
    external_links:
      - ${EXTERNAL_NGINX}

networks:
  default:
    driver: bridge
  second_default:
    external: true
```
Note that the network name second_default is not put into an environment variable, because the top-level networks section does not allow variables at all. It does make sense, however, to use variables in the php section, where they are allowed - for example, for the container name from the external network. If we set EXTERNAL_NGINX=second_nginx in the .env file, then in PHP code it is enough to use the host name second_nginx to make an HTTP request. The port is still 80, and there is no need to specify it explicitly. In the GitHub repository I added scripts that check the connection between containers from different docker-compose.yml files; it is enough to run docker-compose exec php php get.php to make sure it works.
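For completeness, here is a hypothetical fragment of the *other* project's docker-compose.yml, attaching its nginx to the shared network so it is reachable as second_nginx. The service and container names are assumptions for illustration, not taken from the original repository:

```yaml
# Sketch of the second project's compose file (file format 3.5,
# which supports the "name" key on networks).
services:
  nginx:
    container_name: second_nginx
    networks:
      - default

networks:
  default:
    # map this project's default network onto the pre-created external one
    name: second_default
    external: true
```

With this in place, both projects sit on second_default and can address each other by container name.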
When the containers are launched for the first time, Docker may complain with an error saying the network second_default could not be found and asking you to create it. That is exactly right: follow Docker's hint, create the network manually with docker network create second_default, and try again.
I wanted the logs from all services in one folder, accessible locally. This is easily done with ordinary volumes; the main thing is to enable logging in the service itself. For PHP these are options in php.ini, for nginx options in its config, and likewise for postgres. With PHP and nginx everything is simple - the corresponding files are already in our config. For postgres you will have to use command-line options (there is also a way through postgresql.conf, but it is a bit more complicated):
```yaml
services:
  postgres:
    command: >
      postgres
      -c logging_collector=on
      -c log_destination=stderr
      -c log_directory=/logs
      -c client_min_messages=notice
      -c log_min_messages=warning
      -c log_min_error_statement=warning
      -c log_min_duration_statement=0
      -c log_statement=all
      -c log_error_verbosity=default
    volumes:
      - ${LOGS_DIR}:/logs
```
The full text of docker-compose.yml is in the repo and at the end of the article. I must admit I am still not entirely satisfied with the log settings, but for now they will do. Those who want to fine-tune logging can consult the documentation of the relevant services.
A great way to make docker-compose.yml an almost universal config is the .env file. Full universality does not work because of the top-level networks section, where environment variables cannot be used. A feature of my file is the set of variables for postgres, which then do not need to be written into docker-compose.yml itself. In PHP, to connect to the database, you can use:
```php
'dsn' => sprintf('pgsql:host=%s;dbname=%s', getenv('POSTGRES_HOST'), getenv('POSTGRES_DB')),
```
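Here is a sketch of what such a .env file might look like. The variable names are the ones referenced in the compose file and PHP snippet above; the values are illustrative placeholders, not taken from the original repository:

```
# Compose project / container names
COMPOSE_PROJECT_NAME=first
FPM_CONTAINER_NAME=first_php
POSTGRES_CONTAINER_NAME=first_postgres
MONGO_CONTAINER_NAME=first_mongo
NGINX_CONTAINER_NAME=first_nginx

# Host ports
POSTGRES_EXT_PORT=5432
MONGO_EXTERNAL_PORT=27017
NGINX_EXT_PORT=80

# Paths and external containers
LOGS_DIR=./logs
EXTERNAL_NGINX=second_nginx

# Read by the postgres image at startup and by PHP via getenv()
POSTGRES_HOST=postgres
POSTGRES_DB=mydb
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret
```

Thanks to env_file: .env in the compose file, the same variables are available both to Compose itself and inside the containers.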
If you noticed, I used my own images for php and nginx - it saved me testing time. Nothing prevents you from using other images; my config is just a demo. Building your own images is easy - look in the build folder of the repo, where the images used here are created.
Here is the final version of docker-compose.yml, for those too lazy to look into the repo:
```yaml
version: "3.5"

services:
  php:
    image: litepubl/php70:latest
    container_name: ${FPM_CONTAINER_NAME}
    env_file: .env
    working_dir: /var/www/html
    volumes:
      - ..:/var/www/html
      - ./php/php-ini-overrides.ini:/etc/php/7.2/fpm/conf.d/99-overrides.ini
      - ${LOGS_DIR}:/logs
      - ssh:/root/.ssh
    depends_on:
      - postgres
      - mongo
    networks:
      # this network
      - default
      # external network
      - second_default
    external_links:
      - ${EXTERNAL_NGINX}

  postgres:
    image: postgres:9.5
    container_name: ${POSTGRES_CONTAINER_NAME}
    env_file: .env
    ports:
      - ${POSTGRES_EXT_PORT}:5432
    working_dir: /var/www/html
    command: >
      postgres
      -c logging_collector=on
      -c log_destination=stderr
      -c log_directory=/logs
      -c client_min_messages=notice
      -c log_min_messages=warning
      -c log_min_error_statement=warning
      -c log_min_duration_statement=0
      -c log_statement=all
      -c log_error_verbosity=default
    volumes:
      - ..:/var/www/html
      - db:/var/lib/postgresql/data
      - ${LOGS_DIR}:/logs

  mongo:
    image: mongo:latest
    container_name: ${MONGO_CONTAINER_NAME}
    ports:
      - ${MONGO_EXTERNAL_PORT}:27017
    volumes:
      - mongo:/data/db

  webserver:
    image: litepubl/nginx
    container_name: ${NGINX_CONTAINER_NAME}
    working_dir: /var/www/html
    volumes:
      - ..:/var/www/html
      - ${LOGS_DIR}:/var/log/nginx/
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - ${NGINX_EXT_PORT}:80
    depends_on:
      - php

volumes:
  ssh:
  db:
  mongo:

networks:
  default:
    driver: bridge
  second_default:
    external: true
```
As you can see, there is nothing complicated here and everything works. I will also share a few useful commands that I packaged as bat files. Running Codeception tests:
```shell
del tests\_output\*.* /f /q
del tests\_output\debug\*.* /f /q
del logs\debug.log
cd docker
@cls
docker-compose exec php bash test.sh
cd ..
```
and test.sh itself:
```shell
vendor/bin/codecept run unit --steps --html --debug > testlog.txt
```
Any questions that remain after this article can be resolved with the official Docker documentation or the documentation of the other components used. That's all - good luck with your development.
Source: https://habr.com/ru/post/348668/