
Why you don't need sshd in a Docker container

When people launch their first Docker container, they often ask: “How do I get inside the container?” And the knee-jerk answer is, of course: “Just run an SSH server in it and connect!” The purpose of this article is to show that you really do not need sshd inside your container (except, of course, when the container itself is meant to encapsulate an SSH server).

Starting an SSH server is a tempting idea, since it gives quick and easy access “inside” the container. Everyone knows how to use an SSH client, we do it every day, we are familiar with password and key authentication and with port forwarding; in general, SSH access is a well-known thing that is sure to work.

But let's think again.

Let's imagine that you are building a Docker image for Redis or a Java web service. I would like to ask you some questions:
Why do you need SSH?
Most likely you want to make backups, check logs, maybe restart processes, edit settings, or debug something with gdb, strace, or similar utilities. All of this can be done without SSH.

How will you manage your keys and passwords?
There are not many options: either you bake them into the image, or you put them on an external volume. Think about what it takes to update a key or a password. If they are baked in, you will have to rebuild the image, push it out, and restart the containers. Not the end of the world, but not elegant either. A significantly better solution is to put the credentials on an external volume and control access to it. This works, but it is important to make sure the container cannot write to that volume: if it can, the container may corrupt the credentials, and then you will not be able to connect via SSH at all. Worse, if a single volume stores the authentication data for several containers, you will lose access to all of them at once. But that is only a problem if you use SSH access everywhere.
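For example, a read-only mount lets a container use credentials without being able to modify them. A minimal sketch; the myservice image and the paths here are hypothetical:

 # Mount a host directory with credentials read-only (:ro),
 # so the container can read the keys but cannot corrupt them
 docker run -d -v /opt/secrets:/secrets:ro myservice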

How will you manage security updates?
An SSH server is, in itself, fairly solid software. But it is still a window to the outside world, so you will need to install updates and watch security advisories. That is, every otherwise innocuous container now carries a surface that is potentially vulnerable to outside attack and requires attention. We have created a problem for ourselves.

Is it enough to “just add an SSH server” for everything to work?
No. Docker manages and monitors one process per container. If you want to run multiple processes inside a container, you need something like Monit or Supervisor, and that also has to be added to the container. Thus, the simple concept of “one container, one task” turns into something complex that has to be built, updated, managed, and maintained.

You are responsible for building the container image, but are you also responsible for managing access policies to the container?
In a small company this does not matter: most likely you will perform both roles. But in a large infrastructure, one person will most likely build the images, while entirely different people manage access rights. That means baking an SSH server into a container is not the way to go.

But how can I ...



Make backups?

Your data should be stored on an external volume. You can then start another container with the --volumes-from option, which will have access to the same volume. This new container is dedicated to the backup task. An added benefit: when you update or replace your backup and recovery tools, you only need to update this dedicated container rather than all of the others.
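A minimal sketch of this pattern (the myservice image and the paths are illustrative, not from the article):

 # Start the service with its data on a volume
 CID=$(docker run -d -v /data myservice)
 # Run a one-off backup container that shares the same volume
 # and writes an archive into the current host directory
 docker run --rm --volumes-from $CID -v "$(pwd)":/backup busybox \
     tar czf /backup/data-backup.tar.gz /data

The --rm flag removes the one-off backup container as soon as the archive has been written.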

Check logs?

Use an external volume! Yes, the same solution again; what can you do, it fits here too. If you write all your logs to a specific directory that lives on an external volume, you can create a separate “log inspector” container and do everything you need there. Again, if you need special tools to analyze the logs, you can install them in that separate container without cluttering the original one.
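A sketch of the “log inspector” pattern (the image name and log path are made up for illustration; busybox serves as a small disposable image):

 # Start the service, writing its logs to a volume
 CID=$(docker run -d -v /var/log/myservice myservice)
 # Follow the logs from a separate, disposable container
 docker run --rm -it --volumes-from $CID busybox \
     tail -f /var/log/myservice/access.log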

Restart my service?

Any properly designed service can be restarted using signals. When you run foo restart, it almost always just sends a specific signal to the process. You can send a signal with docker kill -s <signal> <container>. Some services do not respond to signals but accept commands instead, for example over a TCP socket or a UNIX socket. You can connect to a TCP socket from the outside, and for a UNIX socket, once again, use an external volume.
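For example, nginx re-reads its configuration when its master process receives SIGHUP, so an operational reload (assuming a container named my-nginx) looks like this:

 # Send SIGHUP to the container's main process; nginx reloads its config
 docker kill -s SIGHUP my-nginx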

“But this is all so difficult!” Not really. Let's imagine that your foo service creates a socket at /var/run/foo.sock and requires you to run fooctl restart for a proper restart. Just start the service with -v /var/run (or add the /var/run volume in the Dockerfile). When you want to restart the service, run the same image with the --volumes-from option. It will look something like this:

 # Start the service, exposing /var/run as a volume
 CID=$(docker run -d -v /var/run fooservice)
 # Restart the service from another container sharing that volume
 docker run --volumes-from $CID fooservice fooctl restart


Edit configuration?

First, you should distinguish operational configuration changes from fundamental ones. If you want to change something significant that should affect all future containers based on this image, the change must be baked into the image itself. In that case you do not need an SSH server, you need to edit the image. “But what about operational changes?” you ask. “After all, I may need to change the configuration over the lifetime of my service, for example, to add virtual hosts to the web server config.” In this case you should use... wait for it... an external volume! The configuration should live on it. You can even set up a dedicated “config editor” container; if you like, install your favorite editor and its plugins there, anything at all. None of this will affect the base container.
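A sketch of the “config editor” idea (the names are hypothetical; busybox is used here simply because it ships with vi, and the service is assumed to reload its config on SIGHUP):

 # Start the service with its configuration on a volume
 CID=$(docker run -d -v /etc/myservice myservice)
 # Edit the config from a dedicated editor container
 docker run --rm -it --volumes-from $CID busybox vi /etc/myservice/myservice.conf
 # Tell the service to re-read its configuration
 docker kill -s SIGHUP $CID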
“But I only make temporary edits, experimenting with different values and looking at the result!” OK, then read the next section for the answer.

Debug my service?

So we have arrived at the case where you really do need real console access “inside” your container: to run gdb or strace, to edit the configuration on the spot, and so on. And for this case there is nsenter.

What is nsenter


nsenter is a small utility that lets you enter namespaces. Strictly speaking, it can both enter existing namespaces and launch processes in new ones. “What namespaces are we talking about?” Namespaces are an important concept behind Docker containers, allowing them to be independent of one another and of the host operating system. Without going into details: with nsenter you can get console access to a running container even if there is no SSH server inside it.

Where to get nsenter?

From GitHub: jpetazzo/nsenter. You can install it by running:
 docker run -v /usr/local/bin:/target jpetazzo/nsenter 


This will install nsenter into /usr/local/bin, and you can use it immediately. In addition, some distributions already ship nsenter (it is part of the util-linux package).

How to use it?

First, find out the PID of the container you want to get inside:
 PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>) 


Now go to the container:
 nsenter --target $PID --mount --uts --ipc --net --pid 


You will get console access “inside” the container. If you want to run a script or program right away, pass it as an argument to nsenter. It works a bit like chroot, except that it operates on containers rather than just directories.
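For example, to look at the processes inside the container without opening an interactive shell (using the $PID obtained above, and assuming the container image includes ps):

 # Run a single command in the container's namespaces and exit
 nsenter --target $PID --mount --uts --ipc --net --pid -- ps aux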

How about remote access?


If you need remote access to a Docker container, you have at least two ways to do this: SSH into the Docker host and run nsenter from there, or give out a dedicated SSH key that is only allowed to invoke nsenter on the Docker host.

The first way is fairly simple, but it requires root rights on the host machine (which is not great from a security standpoint). The second way uses the special “command” feature of SSH authorized keys. You have surely seen a “classic” authorized_keys entry like this:

 ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque 

(Of course, a real key is much longer.) This entry is also where you can specify a forced command. If you want to allow a certain user to check the amount of free memory on your machine over SSH, but do not want to give them full console access, you can put the following into authorized_keys:

 command="free" ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque 


Now, when the user connects with this key, the free command is launched immediately, and nothing else can be run. (Technically, you would also want to add no-port-forwarding; see the authorized_keys section of the sshd manpage for details.) The point of this mechanism is the separation of powers and responsibilities: Alice builds container images but has no access to production servers; Betty has remote access for debugging; Charlotte can only view the logs; and so on.
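For example, the jpetazzo/nsenter repository also provides a docker-enter wrapper script (installed alongside nsenter) that looks up the container's PID and calls nsenter for you. A hypothetical entry restricting Betty's key to entering one specific container (the container name and key comment are made up):

 command="docker-enter my-app-container",no-port-forwarding ssh-rsa AAAAB3N…QOID== betty@workstation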

Conclusions


Is it really so terrible to run an SSH server in every Docker container? Let's be honest: it is not a disaster. It may even be the only option when you have no access to the host system but absolutely need access to the container itself. But, as this article shows, there are many ways to do without an SSH server in the container while keeping all the necessary functionality and ending up with a much more elegant architecture. Yes, Docker lets you do it either way. But before turning your Docker container into a “mini-VPS” of this kind, make sure it is really necessary.

Source: https://habr.com/ru/post/237737/

