📜 ⬆️ ⬇️

Recommendations for security when working with Docker

Docker accelerates development and deployment cycles, letting you ship working code in remarkably little time. But this coin has a flip side: security. Docker affects security in a number of ways worth knowing about, and that is what this article covers. We will look at five typical situations in which images deployed on Docker become a source of new security problems you might not have taken into account. We will also look at useful tools for solving these problems, and offer advice you can use to make sure all the hatches are battened down when you deploy.


1. Reliability of the image


Let's start with a problem that is perhaps inherent in the very nature of Docker: the authenticity of the image.


If you have ever used Docker, you know that it lets you run containers from almost any image: one from the official list of supported repositories, such as NGINX, Redis, Ubuntu, or Alpine Linux, or any other.


As a result, we have a huge selection.


If one container does not solve all your problems, you can replace it with another. But is this the safest approach?


If you do not agree with me, let's consider this question from the other side.


When you develop an application, a package manager makes it easy to use someone else's code. But should you use someone else's code at all? Shouldn't any code you haven't analyzed be treated with a healthy level of suspicion? If security means anything to you, I would always carefully check code before integrating it into my application.


Am I right?


Well, Docker containers deserve the same suspicion.


If you do not know the author of the code, how can you be sure that the container you choose does not contain malicious binaries or other malicious code?


The truth is, you can't be sure.


Under these conditions, I can give three tips.


Use private or trusted repositories


First, you can use private or trusted repositories, such as Docker Hub's trusted repositories.


The official repositories include images for popular software such as NGINX, Redis, Ubuntu, and Alpine Linux.



What distinguishes Docker Hub from other repositories, among other things, is that its images are always scanned by Docker's Security Scanning Service.


If you have not heard of this service, then here is a quote from its documentation:


Docker Cloud and Docker Hub can scan images in private repositories to make sure they contain no known vulnerabilities. They then send a scan report for each image tag.

As a result, if you use the official repositories, you will know that the containers are safe and contain no malicious code.


The option is available on all paid plans. The free plan also includes it, but for a limited time. If you are on a paid plan, you can use the scan function to check how secure your custom containers are and whether they have vulnerabilities you are unaware of.


Thus, you can create a private repository and use it within your organization.
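As an illustration, here is a hedged sketch of how an in-house private registry might be stood up with the official registry image. The image name my-app and port 5000 are just examples, and the commands are echoed rather than executed, since running them requires a Docker daemon:

```shell
# Sketch: run a private registry inside your organization, then tag and
# push an image to it. "my-app" is a placeholder image name.
# The commands are echoed (dry run) because they need a Docker daemon.
cmd_registry="docker run -d -p 5000:5000 --name registry registry:2"
cmd_tag="docker tag my-app:latest localhost:5000/my-app:latest"
cmd_push="docker push localhost:5000/my-app:latest"

printf '%s\n' "$cmd_registry" "$cmd_tag" "$cmd_push"
```

A production registry would, of course, also need TLS and authentication configured.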


Use Docker Content Trust


Another tool to use is Docker Content Trust.


This is a feature introduced in Docker Engine 1.8. It lets you verify the owner of an image.


A quote from an article about the new release by Diogo Mónica, security lead at Docker:


Before an author publishes an image to a remote registry, Docker Engine signs the image with the author's private key. When you later pull that image, Docker Engine uses the public key to verify that the image is the one the author published, that it is not a fake, and that it includes all the latest updates.

To summarize: the service protects you from forged images, replay attacks, and key compromise. I strongly recommend reading that article and the official documentation.
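In practice, enabling content trust is a one-line environment change. A minimal sketch (the docker pull itself is commented out because it needs a running daemon and network access):

```shell
# With DOCKER_CONTENT_TRUST=1, docker pull/push verify and create
# signatures; pulling an unsigned tag fails instead of succeeding silently.
export DOCKER_CONTENT_TRUST=1
echo "DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST"

# docker pull alpine:latest    # verified pull; run where Docker is available
```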


Docker Bench for Security


Another tool I’ve used recently is Docker Bench for Security. It checks containers against a large collection of recommendations for deploying containers in production.


The tool is based on the CIS Docker 1.13 Benchmark and covers checks in six areas:

- host configuration;
- Docker daemon configuration;
- Docker daemon configuration files;
- container images and build files;
- container runtime;
- Docker security operations.



To install it, clone the repository with


git clone git@github.com:docker/docker-bench-security.git 

Then enter cd docker-bench-security and run the following command:


 docker run -it --net host --pid host --cap-add audit_control \
     -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
     -v /var/lib:/var/lib \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v /etc:/etc --label docker_bench_security \
     docker/docker-bench-security

This builds the container and runs a script that checks the security of the host machine and its containers.


Below is an example of the output you will get.


Docker Security Benchmark Sample


As you can see, the output is detailed and color-coded, showing every check and its result.


In my case, some corrections are needed.


What I particularly like about this tool is that it can be automated.


As a result, you get a step you can include in your continuous deployment pipeline to help verify the security of your containers.
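For example, an automated pipeline step might fail the build whenever the benchmark reports warnings. A hypothetical sketch (the log lines below are illustrative samples of the tool's output format; in a real pipeline you would feed it the live output of docker-bench-security):

```shell
# Illustrative sample of the benchmark's output format, saved to a log file.
cat > bench.log <<'EOF'
[PASS] 1.1  - Ensure a separate partition for containers has been created
[WARN] 1.5  - Ensure auditing is configured for the Docker daemon
[PASS] 5.9  - Ensure the host's network namespace is not shared
EOF

# Count WARN lines; a CI step could exit non-zero when any are found.
warnings=$(grep -c '^\[WARN\]' bench.log)
echo "warnings=$warnings"
```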


2. Excess privileges


On to the next issue. As far as I can remember, excess privileges have always been a concern, whether Linux distributions were installed on bare-metal servers or as guest operating systems inside virtual machines.


The fact that we now install them inside Docker containers does not mean they have become significantly safer.


In addition, Docker has raised the level of complexity, because the boundary between guest and host is now blurred. With respect to Docker, I focus on two things:

- containers running in privileged mode;
- containers running with more capabilities than they need.
Regarding the first point: you can run a Docker container with the --privileged option, which gives the container extended permissions.


Quote from the documentation:


The container gains access to all capabilities, and all restrictions enforced by the device cgroup controller are removed. In other words, the container can then do almost everything the host can. This exists to allow special use cases, such as running Docker inside Docker.

If the mere existence of such an option does not give you pause, then I will be surprised and even worried.


Honestly, I cannot think of a reason to use this option, even with extreme caution, unless you have a very particular use case.


Given that, please be very careful if you are still going to use it.


Quote from Armin Braun :


Do not use privileged containers unless you treat them the same way you would any other process running as root.

But even if you do not run containers in privileged mode, one or more of your containers may still have more capabilities than they need.


By default, Docker starts containers with a fairly restricted set of capabilities. However, these capabilities can be extended with a custom profile.


Depending on where you host your Docker containers (with vendors such as DigitalOcean, sloppy.io, dotCloud, or Quay.io), the default settings may differ from yours.


You can also host them yourself, in which case it is just as important to validate the privileges of your containers.


Drop unnecessary privileges and capabilities


It doesn't matter where you are hosted. As stated in the Docker security guide:


It is best for users to drop all capabilities except those that are necessary for their processes.

Think about questions such as these (the exact list depends on your application):

- Does your application need to change process capabilities?
- Does it need to bind to privileged ports (below 1024)?

If not, disable these capabilities.


Conversely, does your application need any special capabilities that most applications do not require by default? If so, enable those capabilities.


This way, you limit an intruder's ability to damage your system, because they simply will not have access to those capabilities.


To do this, use the --cap-drop and --cap-add options.


Suppose your application does not need to change process capabilities or bind to privileged ports, but does need to load and unload kernel modules. The corresponding capabilities can be dropped and added like this:


 docker run \
     --cap-drop SETPCAP \
     --cap-drop NET_BIND_SERVICE \
     --cap-add SYS_MODULE \
     -ti <image> /bin/sh

More detailed instructions can be found in the Docker documentation: "Runtime privilege and Linux capabilities".
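To see the effect of dropping capabilities, you can inspect the effective-capability bitmask the kernel exposes for any process. Run this inside a container and compare it with the host: a default container shows a much smaller mask, while a --privileged one shows all bits set.

```shell
# CapEff in /proc/self/status is the effective-capability bitmask of the
# current process; decode it with `capsh --decode` where that tool exists.
grep '^CapEff' /proc/self/status
```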


3. System security


So, you use a trusted image and have reduced or removed excess privileges from your containers.


But how safe is this image?


For example, what could intruders do if they gained access to one of your containers? In other words, how well have you hardened your containers?


If it is easy to break into your container, then it is just as easy for anyone else. If so, it is time to harden your container.


Docker is reasonably secure by default, thanks to namespaces and cgroups, but you should not trust these mechanisms blindly.


You can go further and use other Linux security tools, such as AppArmor, SELinux, grsecurity, and Seccomp.


Each of these tools is well thought out and battle-tested, and will help you further harden your container host.


If you have never used these tools, then here is a brief overview of each of them.


AppArmor


This is a Linux kernel security module that allows the system administrator to restrict programs' capabilities with per-program profiles. Profiles can grant permissions such as read, write, and execute on matching file paths. AppArmor provides mandatory access control (MAC) and thus complements the traditional Unix discretionary access control (DAC) model. AppArmor has been included in the mainline Linux kernel since version 2.6.36.

Source: Wikipedia .


SELinux


Security-Enhanced Linux (SELinux) is an implementation of mandatory access control that can run in parallel with the classic discretionary access control system.

Source: Wikipedia.


Grsecurity


This is a set of patches for the Linux kernel that includes several security-related enhancements, including mandatory access control, randomization of key local and network data, /proc and chroot() jail restrictions, network socket control, capability control, and additional audit features. Typical applications are web servers and systems that accept remote connections from untrusted locations, such as servers that give users shell access.

Source: Wikipedia.


Seccomp


This is a computer security facility in the Linux kernel, merged into the mainline kernel in version 2.6.12, released March 8, 2005. Seccomp allows a process to be switched into a "secure" mode in which it cannot make any system calls except exit(), sigreturn(), and read() and write() on already-open file descriptors. If the process attempts any other system call, the kernel kills it with SIGKILL. Thus, Seccomp does not virtualize system resources but simply isolates the process from them.

Source: Wikipedia .


Since this article is about something else, I will not be able to show these technologies in working examples or describe them in more detail here.


But still, I highly recommend learning more about them and integrating them into your infrastructure.
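To give one concrete taste of these tools: Docker lets you attach a custom Seccomp profile to a container with --security-opt seccomp=profile.json. Below is a deliberately minimal sketch of such a profile; a real allowlist would need many more syscalls for a container even to start, so treat this purely as an illustration.

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any syscall not listed fails with an error (SCMP_ACT_ERRNO); you would then start the container with docker run --security-opt seccomp=profile.json.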


4. Limit the consumption of available resources


What does your application need?


Is it a lightweight application that consumes no more than 50 MB of memory? Then why give it more? Does it perform intensive processing that requires four or more CPUs? Then give it access to them, but no more.


If analysis, profiling, and benchmarking are part of your continuous development process, then you know what resources your application needs.


Therefore, when you deploy containers, make sure they have access only to what they strictly need.


To do this, use the following Docker options:


 -m / --memory:          # hard memory limit
 --memory-reservation:   # soft memory limit
 --kernel-memory:        # kernel memory limit
 --cpus:                 # number of CPUs available to the container
 --device-read-bps:      # limit the read rate (bytes per second) from a device

Here is an example configuration from the official Docker documentation:


 version: '3'
 services:
   redis:
     image: redis:alpine
     deploy:
       resources:
         limits:
           cpus: '0.001'
           memory: 50M
         reservations:
           memory: 20M

More information is available via docker help run or in the "Runtime constraints on resources" section of the Docker documentation.
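The same limits shown in the Compose example can be applied to a single container at run time. A sketch (the values are illustrative, and the command is echoed rather than run because it needs a Docker daemon):

```shell
# Run redis with a hard memory limit, a soft reservation, and half a CPU.
cmd="docker run -d --memory 50m --memory-reservation 20m --cpus 0.5 redis:alpine"
echo "$cmd"
# eval "$cmd"    # run where Docker is available
```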


5. A large attack surface


The last security aspect to consider is a direct consequence of how Docker works: a potentially very large attack surface. Any IT organization faces this risk, but especially one that relies on the ephemeral nature of container infrastructure.


Since Docker makes it possible to create and deploy applications quickly, and to remove them just as quickly, it is difficult to keep track of exactly which applications are deployed in your organization.


Under these conditions, many more elements of your infrastructure can potentially be attacked.


Not sure what is deployed across your organization? Then ask yourself questions such as:

- How many containers are running right now?
- Which images are they based on, and can you trust those images?
- Who deployed each container, and when?
I hope these questions are not too difficult for you to answer. Either way, let's consider what practical steps you can take.


Implement an audit trail with proper logging


Inside an application, you usually record user actions such as signing in and out, creating accounts, and resetting passwords.
In addition to these actions, you should keep a record of every container that is created and deployed in your organization.


There is no need to overcomplicate this record-keeping. It is enough to log things such as which container was deployed, who deployed it, and when.
Most continuous delivery tools should be able to record this information, either natively or via custom scripts in your programming language of choice.
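If your tooling cannot record deployments natively, even a tiny wrapper script can maintain the audit trail. A hypothetical sketch; the field names and log file location are illustrative:

```shell
# Append one audit record per deployment: UTC timestamp, image, user.
log_deploy() {
  printf '%s action=deploy image=%s user=%s\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "${USER:-unknown}" >> deploy-audit.log
}

log_deploy "redis:alpine"
cat deploy-audit.log
```

Calling log_deploy from the deploy step of your pipeline gives you an append-only record you can grep later.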


It is also worth implementing notifications by email or another channel (IRC, Slack, or HipChat). This ensures that everyone can see what is being deployed and when.


That way, if something improper happens, it cannot be hidden.


Before I finish this article, please do not misunderstand me. I am not urging you to stop trusting your employees; it is simply better to always know what is going on.


I am not suggesting you go overboard and get bogged down in creating lots of new processes.


That approach would most likely only deprive you of the advantages containers give you, and it would be completely unnecessary.


And yet, if you at least think about these questions and regularly revisit them, you will be better informed and will be able to reduce the number of blind spots in your organization that could be attacked from outside.


Conclusion


So, we looked at five Docker security issues and a number of possible solutions for them.


I hope that if you are switching, or have already switched, to Docker, you will keep these issues in mind and will be able to provide the necessary level of protection for your applications. Docker is an amazing technology; it is only a pity it did not appear sooner.


I hope that the information presented in this article will help you protect yourself from all unexpected problems.


About the author


Matthew Setter is an independent developer and technical writer. He specializes in building well-tested applications and writes about modern development practices, including continuous delivery, testing, and security.




This article is a translation of "Docker Security Best Practices".



Source: https://habr.com/ru/post/333402/

