
Docker + Laravel = ❀



In this article I will talk about my experience wrapping a Laravel application in Docker containers, so that frontend and backend developers can work with it locally, and launching it in production is as simple as possible. CI will also automatically run static code analyzers and phpunit tests, and build the images.


"And what is the difficulty?" - you can say, and you will be partly right. The fact is that quite a lot of discussion has been devoted to this topic in the Russian-speaking and English-speaking communities, and I would conditionally divide almost all the studied threads into the following categories:



Everything you read below is subjective experience that does not claim to be the ultimate truth. If you have additions or spot inaccuracies, you are welcome in the comments.


For the impatient: a link to the repository, which you can clone to run the Laravel application with one command. It is also not difficult to run it on Rancher, for example, by properly linking the containers, or to use the production version of docker-compose.yml as a starting point.

The theoretical part


What tools will we use in our work, and what will we emphasize? First of all, we will need the following installed on the host:

- docker
- docker-compose
- make



You can install docker on Debian-like systems with the command curl -fsSL get.docker.com | sudo sh, but docker-compose is better installed with the help of pip, since the most recent versions live in its repositories (apt lags far behind, as a rule).
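For reference, a minimal install sketch (assuming a Debian-like system with python/pip already present):

 $ curl -fsSL get.docker.com | sudo sh   # installs the docker engine
 $ sudo pip install docker-compose       # pip tends to carry a fresher version than apt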

That completes the list of dependencies. Whether you use phpstorm, netbeans, or good old vim to work with the source code is entirely up to you.


Next comes an improvised Q&A on (I am not afraid of this word) designing the images.



Having decided on the main approaches, let's move on to our application. At a minimum it should be able to serve web requests and work with its backing services (postgres and redis in our case); that is a basic set which can be expanded if necessary. Now let's move on to the images we have to build for our application to "take off" (their code names are given in brackets):

- the environment for running the application (app)
- the image with the application source code (sources)
- the web server (nginx)



The rest of the services needed for development are launched in containers pulled from hub.docker.com; in production they run on separate, clustered servers. All that remains for us is to tell the application (via the environment) at which addresses/ports and with what credentials it should reach them. Using service-discovery for these purposes would be even cooler, but that is a story for another time.
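For a Laravel application this boils down to a handful of environment variables. A sketch with assumed host names (Laravel reads these keys out of the box):

 DB_CONNECTION=pgsql
 DB_HOST=postgres
 DB_PORT=5432
 REDIS_HOST=redis
 REDIS_PORT=6379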


Having finished with the theoretical part, I suggest moving on to the practical one.


The practical part


I propose to organize the files in the repository as follows:


 .
 β”œβ”€β”€ docker                # everything needed for building the service images
 β”‚   β”œβ”€β”€ app
 β”‚   β”‚   β”œβ”€β”€ Dockerfile
 β”‚   β”‚   └── ...
 β”‚   β”œβ”€β”€ nginx
 β”‚   β”‚   β”œβ”€β”€ Dockerfile
 β”‚   β”‚   └── ...
 β”‚   └── sources
 β”‚       β”œβ”€β”€ Dockerfile
 β”‚       └── ...
 β”œβ”€β”€ src                   # application source code
 β”‚   β”œβ”€β”€ app
 β”‚   β”œβ”€β”€ bootstrap
 β”‚   β”œβ”€β”€ config
 β”‚   β”œβ”€β”€ artisan
 β”‚   └── ...
 β”œβ”€β”€ docker-compose.yml    # Compose configuration for development
 β”œβ”€β”€ Makefile
 β”œβ”€β”€ CHANGELOG.md
 └── README.md

You can view the structure and files by clicking on this link .

To build a service image, you can use the command:


 $ docker build \
     --tag %local_image_name% \
     -f ./docker/%service_directory%/Dockerfile ./docker/%service_directory%

The only exception is building the image with the source code: it needs the build context (the last argument) to be ./src.
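So for the sources image the command looks roughly like this (directory names as in the tree above):

 $ docker build \
     --tag %local_image_name% \
     -f ./docker/sources/Dockerfile ./src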


I recommend naming images in the local registry the way docker-compose does by default, namely %root_directory_name%_%service_name%. If the project directory is called my-awesome-project and the service is called redis, then my-awesome-project_redis is the better choice for the (local) image name.


To speed up the build process, you can tell docker to use the cache of a previously built image; the --cache-from %full_registry_name% option serves this purpose. The docker daemon will then check each instruction in the Dockerfile: has it changed? If not (the hashes match), it skips the instruction and reuses the ready-made layer from the image you pointed it at as a cache, instead of rebuilding it. This speeds rebuilds up nicely, especially if little has changed :)
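A sketch of how that looks in practice (the pull may fail on a cold start, hence the || true):

 $ docker pull %full_registry_name%:latest || true
 $ docker build \
     --cache-from %full_registry_name%:latest \
     --tag %local_image_name% \
     -f ./docker/app/Dockerfile ./docker/app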

Pay attention to the ENTRYPOINT launch scripts of the application container.

The image of the environment for running the application (app) was built taking into account that it will work not only in production: developers need to interact with it effectively on their local machines as well. Installing and removing composer dependencies, running unit tests, tailing logs, and using familiar aliases (php /app/artisan β†’ art, composer β†’ c) should all work without any discomfort. Moreover, the same image is used to run unit tests and static code analyzers (phpstan in our case) on CI. That is why its Dockerfile, for example, contains the line for installing xdebug, while the module itself stays disabled (it is enabled only on CI).
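A Dockerfile fragment sketching this idea (assuming the official php base image; the real Dockerfile in the repository may differ):

 # install the xdebug extension, but deliberately do NOT enable it here;
 # CI enables the module itself (e.g. with `docker-php-ext-enable xdebug`)
 RUN pecl install xdebug

 # developer conveniences: the aliases mentioned above (the file path is an assumption)
 RUN echo "alias art='php /app/artisan'" >> /etc/profile.d/aliases.sh \
  && echo "alias c='composer'"           >> /etc/profile.d/aliases.sh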


Also, the hirak/prestissimo package is installed globally for composer; it greatly speeds up the process of installing all the dependencies.
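In the Dockerfile that is a one-liner (a sketch):

 RUN composer global require hirak/prestissimo --no-interaction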

In production, we mount the contents of the /src directory from the sources image into the /app directory inside it. For development, we bind-mount the local directory with the application source code instead (-v "$(pwd)/src:/app:rw").
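In docker-compose.yml terms, the development variant looks roughly like this (the service name is an assumption):

 version: '3'
 services:
   app:
     volumes:
       - ./src:/app:rw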


And here lies one difficulty: the access rights to files created from inside the container. The fact is that, by default, the processes running inside a container run as root (root:root), the files created by these processes (cache, logs, sessions, etc.) do too, and as a result you cannot do anything with them locally without running sudo chown -R $(id -u):$(id -g) /path/to/sources.

One of the solutions is to use fixuid, but this solution is frankly "so-so". The way I prefer is to pass the local USER_ID and its GROUP_ID inside the container, and start the processes with these values. Substituting 1000:1000 by default (the default values for the first local user) got rid of the $(id -u):$(id -g) calls, and if necessary you can always override them ($ USER_ID=666 docker-compose up -d) or put them into the .env file for docker-compose.
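One possible sketch of such an entrypoint (assuming a www-data user and the su-exec utility exist in the image; the real script in the repository may differ):

 #!/usr/bin/env sh
 # align the in-container user with the host user, then drop privileges
 : "${USER_ID:=1000}" "${GROUP_ID:=1000}"
 groupmod -o -g "$GROUP_ID" www-data
 usermod  -o -u "$USER_ID"  www-data
 exec su-exec www-data "$@"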


Also, when running php-fpm locally, do not forget to disable opcache in it; otherwise you are guaranteed a pile of "what the hell is this?!" moments.
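With the official php image layout, for example, it is enough to drop in one ini file (a sketch; the ini path is an assumption based on that image):

 # disable opcache for local development
 $ echo 'opcache.enable=0' > /usr/local/etc/php/conf.d/zz-opcache-off.ini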


For the "direct" connection to redis and postgres, I’ve thrown additional ports "outward" ( 15432 and 15432 respectively), so there is no problem in principle to "connect and see what and how it really is".


I keep the container with the code name app running all the time (--command keep-alive.sh) for convenient access to the application.
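The script itself can be as trivial as this (a sketch of the idea):

 #!/usr/bin/env sh
 # do nothing, forever: the container stays up so `docker-compose exec app ...` works
 while true; do sleep 86400; done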


Here are some examples of solving "everyday" tasks with docker-compose :


 Operation                          Executable command
 Installing a composer package      $ docker-compose exec app composer require package/name
 Running phpunit                    $ docker-compose exec app php ./vendor/bin/phpunit --no-coverage
 Installing all node dependencies   $ docker-compose run --rm node npm install
 Installing a node package          $ docker-compose run --rm node npm i package_name
 Launching live asset rebuilding    $ docker-compose run --rm node npm run watch

All startup details can be found in the docker-compose.yml file .


Long live make!


Typing the same commands every time becomes boring after the second time, and since programmers are lazy creatures by nature, let's take care of "automating" them. Keeping a set of sh scripts is an option, but not as attractive as a Makefile, especially since its applicability in modern development is greatly underestimated.


You can find the complete Russian-language manual on it at this link .

Let's see what running make in the root of the repository looks like:


 [user@host ~/projects/app] $ make
 help          Show this help
 app-pull      Application - pull latest Docker image (from remote registry)
 app           Application - build Docker image locally
 app-push      Application - tag and push Docker image into remote registry
 sources-pull  Sources - pull latest Docker image (from remote registry)
 sources       Sources - build Docker image locally
 sources-push  Sources - tag and push Docker image into remote registry
 nginx-pull    Nginx - pull latest Docker image (from remote registry)
 nginx         Nginx - build Docker image locally
 nginx-push    Nginx - tag and push Docker image into remote registry
 pull          Pull all Docker images (from remote registry)
 build         Build all Docker images
 push          Tag and push all Docker images into remote registry
 login         Log in to a remote Docker registry
 clean         Remove images from local registry
 ---------------
 up            Start all containers (in background) for development
 down          Stop all started for development containers
 restart       Restart all started for development containers
 shell         Start shell into application container
 install       Install application dependencies into application container
 watch         Start watching assets for changes (node)
 init          Make full application initialization (install, seed, build assets)
 test          Execute application tests

 Allowed for overriding next properties:

   PULL_TAG - Tag for pulling images before building own ('latest' by default)
   PUBLISH_TAGS - Tags list for building and pushing into remote registry (delimiter - single space, 'latest' by default)

 Usage example:

   make PULL_TAG='v1.2.3' PUBLISH_TAGS='latest v1.2.3 test-tag' app-push

It is very good at target dependencies. For example, to start watch (docker-compose run --rm node npm run watch), the application must already be "up"; all you have to do is declare the up target as a dependency, and you need not worry about forgetting to do something before calling watch: make will do everything for you. The same applies to running the tests and static analyzers before committing changes, for example: run make test and all the magic will happen for you!
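The dependency mechanics look roughly like this (a sketch; the real Makefile in the repository is richer, and recipes must be indented with tabs):

 up: ## Start all containers (in background) for development
 	docker-compose up -d

 watch: up ## Start watching assets for changes (node)
 	docker-compose run --rm node npm run watch

 test: up ## Execute application tests
 	docker-compose exec app php ./vendor/bin/phpunit --no-coverage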


Needless to say that building the images, pulling them, specifying --cache-from and all the rest is no longer anything to worry about either?


You can view the contents of the Makefile here .


The automation part


Let's get to the final part of this article: automating the process of updating images in the Docker Registry. Although GitLab CI is used in my example, I think it will be quite possible to transfer the idea to another integration service.


First of all, we define and name the image tags used:


latest: images built from the master branch. The state of the code is the most "fresh", but not yet ready to be released.

some-branch-name: images built in the some-branch-name branch. This way we can "roll out" to any environment the changes that were implemented only within a specific branch, before it is merged into master: it is enough to "pull" the images with this tag. And yes, the changes can affect both the code and the images of any of the services!

vX.XX: an actual release of the application (use it to deploy a specific version).

stable: an alias for the tag of the most recent release (use it to deploy the most recent stable version).

A release is made by pushing a tag in the vX.XX format.
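In other words (the version number here is taken from the make usage example above):

 $ git tag v1.2.3
 $ git push origin v1.2.3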


To speed up the build, caching of the ./src/vendor and ./src/node_modules directories plus --cache-from for docker build is used, and the pipeline consists of the following stages:


prepare: the preparatory stage; building the images of all services except the sources image.

test: testing the application (running phpunit and the static code analyzers) using the images built at the prepare stage.

build: installing all composer dependencies (--no-dev), building the assets with webpack, and building the image with the source code that includes the resulting artifacts (vendor/*, app.js, app.css).
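A rough sketch of one prepare-stage job in .gitlab-ci.yml terms (the job name is an assumption, the stages are assumed to be declared as in the table above, and the real config is linked below; $CI_REGISTRY_IMAGE and $CI_COMMIT_REF_SLUG are standard GitLab CI variables):

 build:app:
   stage: prepare
   script:
     - docker pull $CI_REGISTRY_IMAGE/app:latest || true
     - docker build --cache-from $CI_REGISTRY_IMAGE/app:latest --tag $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG -f ./docker/app/Dockerfile ./docker/app
     - docker push $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG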

(Pipeline screenshot: a build on the master branch, pushing images with the latest and master tags.)

On average, all the build stages take 4 minutes, which is a pretty good result (parallel execution of jobs does most of the work here).


You can familiarize yourself with the contents of the build configuration (.gitlab-ci.yml) at this link.


Instead of a conclusion


As you can see, organizing work with a php application (using Laravel as an example) with Docker is not so difficult. As a test, you can fork the repository and, replacing all occurrences of tarampampam/laravel-in-docker with your own name, try everything "live" yourself.


To start it locally, you only need to execute 2 commands:


 $ git clone https://gitlab.com/tarampampam/laravel-in-docker.git ./laravel-in-docker && cd $_
 $ make init

Then open http://127.0.0.1:9999 in your favorite browser.


… Taking this opportunity


At the moment I am working as a TL on the avtocod project, and we are looking for talented php developers and system administrators (the development office is located in Yekaterinburg). If you consider yourself the former or the latter, write a letter to our HR with the text "I want to join the development team, resume: %link_on_resume%" to the email hr@avtocod.ru; we help with relocation.



Source: https://habr.com/ru/post/425101/

