Most of us learned to use Docker a long time ago: on local machines, on test benches, and on production servers. Docker, whose open-source project was recently renamed Moby, has become a firm part of the pipeline that delivers code to users. But best practices for working with container virtualization, and with Docker in particular, are still taking shape.
In the early days of Docker as the primary process-isolation tool, many people used it much like a virtual machine. The approach was as simple as possible: install all the required dependencies into the image (Docker Image), build everything that needs building right there in the same image, take the resulting build artifact, and bake it all into the final image.
This approach has an obvious drawback: the software needed for the build is not always needed at runtime. For example, you need a compiler to build a C++ or Go program, but the resulting binary can run without one. Meanwhile, the software required for the build can weigh far more than the resulting artifact.
The second drawback follows from the first: the more software in the final image, the more vulnerabilities, which weakens the security of our services.
Today it is common practice to separate the image used for the build from the image used to run the service.
It looks and is used like this:
We describe the build environment in build.Dockerfile and build the so-called buildbox-image from this file:

```shell
# Example:
#
# -f — path to the Dockerfile (here "build.Dockerfile")
# -t — tag for the resulting image (here "buildbox-image")
#
docker build -f build.Dockerfile -t buildbox-image .
```
Then we use buildbox-image to build the service: at launch we mount the sources into the container and start the build (in the example, the build is started with make build):

```shell
# Example:
#
# --rm — remove the container after it exits
# -v   — mount the current directory into the container at "/app"
#
docker run --rm -v $(pwd):/app -w /app buildbox-image make build
```
Having received the artifact at $(pwd)/bin/myapp, we can simply bake it into an image with the minimum amount of software. To do this, next to build.Dockerfile we put the Dockerfile that will be used to run the service in production. This Dockerfile may look like this:

```dockerfile
FROM alpine:3.5

COPY bin/myapp /myapp

EXPOSE 65122
CMD ["/myapp"]
```
The Dockerfile-separation approach has proven itself, but the separation itself is a rather routine and not always pleasant task, so ideas for simplifying the process have been around for a long time.
I first heard about the idea of build stages inside the Dockerfile from the folks at Grammarly. They implemented build stages long ago in a wrapper on top of Docker and named it Rocker. But the Docker Engine itself had no such functionality.
Now Docker has finally merged a pull request that implements build stages (https://github.com/moby/moby/pull/32063); they are available starting with v17.05.0-ce-rc2 (https://github.com/moby/moby/releases/tag/v17.05.0-ce-rc2). Separate Dockerfiles for the build are no longer needed, since the build stages can now be separated within a single Dockerfile.
In the build stage you can perform all build-related operations and pass only the artifact on to the next stage, from which we get an image containing only the software required for the service to run.
As an example, take a service written in Go. The Dockerfile of this service, with stages separated, might in the general case look like this:

```dockerfile
# Build stage "build-env"
FROM golang:1.8.1-alpine AS build-env

# Install the tools needed for the build
RUN apk add --no-cache \
    git \
    make

ADD . /go/src/github.com/username/project
WORKDIR /go/src/github.com/username/project

# Build
RUN make build

# --------
# Final image
FROM alpine:3.5

# Copy the artifact from the "build-env" stage
COPY --from=build-env /go/src/github.com/username/project/bin/service /usr/local/bin/service

EXPOSE 65122
CMD ["service"]
```
Build results:
```
REPOSITORY                     TAG      IMAGE ID       CREATED         SIZE
registry.my/username/project   master   ce784fb88659   2 seconds ago   16.5MB
<none>                         <none>   9cc9ed2befc5   6 seconds ago   330MB
```
330 MB for the build stage, 16.5 MB for the final image, ready to run. All in one Dockerfile with minimal configuration.
On the machine, the build-stage image is saved to disk as <none>:<none>.
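These dangling intermediate images accumulate over time and can be cleaned up with standard Docker CLI commands; a minimal sketch:

```shell
# List dangling (untagged) images, including leftover build stages
docker images -f dangling=true

# Remove them (docker image prune is available since Docker 1.13)
docker image prune -f
```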
It is possible to use more than two stages, for example if the backend and frontend are built separately. A stage does not have to inherit from the previous one; it is perfectly legal to start a stage from a new parent image. If the parent image is not found on the machine, Docker pulls it when it reaches that stage. Each FROM instruction starts a new stage with a clean state, discarding the effects of all previous commands.
Here is an example of how several build stages can be used:
```dockerfile
# Build stage "build-env"
FROM golang:1.8.1-alpine AS build-env
ADD . /go/src/github.com/username/project
WORKDIR /go/src/github.com/username/project
# Build
RUN make build

# --------
# Stage "build-second", inheriting from "build-env"
FROM build-env AS build-second
RUN touch /newfile
RUN echo "123" > /newfile

# --------
# Frontend stage "build-front"
FROM node:alpine AS build-front
ENV PROJECT_PATH /app
ADD . $PROJECT_PATH
WORKDIR $PROJECT_PATH
RUN npm run build

# --------
# Final image
FROM alpine:3.5
# Artifact from "build-env"
COPY --from=build-env /go/src/github.com/username/project/bin/service /usr/local/bin/service
# Static files from "build-front"
COPY --from=build-front /app/public /app/static
EXPOSE 65122
CMD ["service"]
```
To build only up to a particular stage, the --target flag is provided. With this flag the build is carried out up to the specified stage, including all previous ones, and the image for that stage is saved to disk and tagged with the given tag.
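For instance, with the multi-stage Dockerfile above, the build can be stopped at the "build-env" stage like this (the tag `project:build-env` is an example name, not from the original):

```shell
# Build only up to and including the "build-env" stage
# and tag the resulting image
docker build --target build-env -t project:build-env .
```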
Release 17.05.0 is scheduled for 2017-05-03. As far as one can judge, this is genuinely useful functionality, especially for compiled languages.
Thanks for your attention.
Source: https://habr.com/ru/post/327698/