
The idea of containerization appeared a long time ago, but Docker turned out to be the first technology to achieve mass popularity. We talked with our experts about why this happened, how much Docker has matured in three years, and when you can stop worrying and start using Docker in your production application:

Alexander Tarasov (aatarasoff) is a Software Architect at Alfa Laboratory. He is currently implementing a microservice architecture and developing the DevOps direction, and more than a year ago he spoke about his experience introducing Docker at Alfa-Bank.
Docker in production: you cannot use a tool just because it is fashionable.
- Why did you start using Docker?
- It all started not with Docker, of course: last year we had a tectonic shift toward distributed systems and microservice architecture. As part of that process we began to rework our system, moving to more lightweight APIs and UIs and developing distributed frontend systems.
At some point we realized that introducing new technologies such as NodeJS raises the question of deployment to test and production environments. We needed a tool that would unify how software is packaged and delivered to the customer while making maintenance as easy as possible. So for us Docker primarily performs an encapsulation function, hiding an application's implementation details behind a unified API. This lets developers approach the choice of technologies more freely, and lets the application reach the state that Kirill Tolkachev and I called a "stressless architecture" in one of our talks: the container immediately contains everything the software needs to launch properly without conflicting with other software hosted on the same cluster or machine, and the support team feels more confident when updating individual parts of the application.
For an example of a "stressful" architecture, look at a classic J2EE application: we depend on the Java version and the J2EE server version, which imposes certain restrictions on us, and migration to new versions requires extensive testing and therefore happens quite rarely. With Docker we change this model: since all the necessary dependencies are already inside the container, nothing prevents us from starting to use a new version of Java or of the server.
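To illustrate that self-containment, here is a minimal sketch (the image name, jar name, and versions are arbitrary assumptions): the Java runtime ships inside the image, so moving to a new Java version is a one-line change rather than a server-wide migration.

    # Everything the service needs, including the JRE, is baked into the image.
    cat > Dockerfile <<'EOF'
    FROM openjdk:8-jre
    COPY my-service.jar /app/my-service.jar
    CMD ["java", "-jar", "/app/my-service.jar"]
    EOF
    docker build -t my-service .
    # Upgrading Java later means changing only the FROM line.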
- But Docker does not eliminate testing during a migration. What exactly does it make easier?
- Docker is built around the concept of a single application per container, so you get away from the situation where you need to update a J2EE server with two dozen different components deployed on it. Instead, you deal with isolated applications that use more lightweight embedded servers. This lets you spread an update out over time and migrate piece by piece instead of migrating everything at once.
- So Docker actively pushes you toward a microservice architecture?
- I would not say it pushes, but it fits into it very well. The two complement each other very harmoniously.
We did not rebuild our system just to use Docker; on the contrary, we started from the need to split up the J2EE server we had at the time, and the decision to use Docker emerged from the tasks in front of us. I think this is the right approach: if you do not know why you need a technology and cannot clearly articulate what problems it solves, then you do not need it. You cannot use a tool just because it is fashionable.
- So you have a microservice architecture that has never lived without Docker. If you imagine the reverse situation, would you move an already running microservice application to Docker?
- It is very difficult to talk about an abstract project in a vacuum. You have to understand that Docker solves specific problems. In any case we would need orchestration, management tools, and so on. If we had already built something ourselves, then before migrating to Docker we would have to seriously weigh the advantages Docker would bring over the current state and decide based on the ratio of benefits to the costs of such a migration.
- Did you consider any tools other than Docker?
- When we were choosing a tool, there were in fact no alternatives to Docker; besides, Docker a year and a half ago and Docker now are completely different things. At the time there were LXC containers, but Docker seemed more convenient, first of all from a developer's point of view.
Now, in addition to Docker, the Rocket project has appeared: also a containerization system, with its own API.
At some point many large companies realized that containerization is a very promising technology, and they created the Open Container Initiative consortium, within which the runC runtime is being developed. It is fully open source and can run Docker images as well as any other compatible container types. Docker itself is now built on top of runC. So if using Docker is vendor lock-in at all, it is a very small one.
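You can observe this directly on recent Docker versions (1.12 and later), where the engine delegates container execution to runC:

    # Recent Docker engines report runc as the default runtime.
    docker info | grep -i runtime
    # Typical output: "Runtimes: runc" / "Default Runtime: runc"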
- Since we have started talking about vendor lock-in: plain Docker on its own is still not enough in production; accompanying services are required for orchestration, log processing, and so on. Could it happen that all such solutions are built around Docker, making it hard to replace for that reason as well?
- Of course, since Docker is now the market leader, all solutions in this area support Docker or focus on it. However, if we talk about Marathon, which we use now, it is just a framework on top of Mesos that specializes in launching containers, or rather long-running tasks.
I think that if Rocket becomes popular, people will write a new Mesos framework that supports Rocket, or the Rocket team will write their own orchestration tools and people will use those.
- Could it turn out that every developer will have to learn Docker?
- The short answer is yes. The long one is that it is a matter of culture: a developer is an engineer and should be able to solve a problem end to end.
Testing is a good analogy: writing unit tests is not a separate person's job, and even integration tests can be the developer's work. We believe an engineer should create complete, verified solutions that are ready to work for the client. If you write tests for your software and run them on the CI server as part of the delivery process, then a dedicated tester is needed to work out test cases and test models, not to do the testing itself. It is the same in the world of containerization: engineers write deployment scripts for their software, build containers, and are able to run Docker images. Yes, this is additional knowledge they have to pick up, but it does not mean they must be experts in system administration or fine-tune system software, although such knowledge of adjacent areas is never superfluous.
This is a matter of culture and of a cultural shift. Development is not just code; for us, development is the release of a turnkey solution that can be delivered to a client.
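In practice, the baseline every engineer is expected to master is small; a sketch with a hypothetical image name:

    # Build an image from the Dockerfile in the current directory...
    docker build -t myapp:1.0 .
    # ...and run it detached, publishing the application port to the host.
    docker run -d --name myapp -p 8080:8080 myapp:1.0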
- Did it take a lot of tweaking to get Docker deployed?
- Docker itself needed only a little tweaking, but the infrastructure software around it did: configuration storage systems (Consul, Zookeeper), orchestration tools (Mesosphere and Marathon), and log processing (Elasticsearch, Kibana, Kafka) took quite a lot of time.
If the question is whether there is more work after adopting Docker, I would say it is a wash: in some places there is less of it thanks to independent, self-sufficient components; in others you have to make allowances for Docker's peculiarities, which need to be understood and taken into account during deployment.
As for the configuration of Docker itself, maybe we simply have not faced such tasks yet, but we have not had to fine-tune the Docker engine or change the storage driver from application to application.
- Where is Docker on the Gartner hype curve for your team?
- I think we are on the plateau of productivity. At first there was a lot of euphoria because Docker solves many problems and has a minimum of flaws; gradually we understood that there is no silver bullet, solved or worked around the problems that came up, and now we can effectively solve business problems with this technology.
Prod-ready. When you eat it, you are at least not disgusted, especially if you know how to cook.

Andrei Filatov (lincore) is a lead systems engineer at EPAM Systems, a cloud solutions specialist, and a Docker and DevOps fan.
- How do you use Docker?
- On one project we have fully automated CI with Docker and Jenkins. Docker is used for everything: the build slaves are launched in Docker containers, test environments are deployed in other containers, and the application itself also runs in a Docker container.
- And the master?
- No, the master is a big bare-metal machine. When you need to build something small and run one or two builds, Docker is fine, but we have 20-30 processes running 24/7, and Docker is not well suited to that; we would hit a performance wall. Even in the current configuration we utilize our big machine to the maximum, even though the master only launches and orchestrates; everything is built on the Docker slaves.
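One common way to run such slaves (a sketch; the Jenkins URL, secret, and agent name below are placeholders) is the JNLP agent image, which dials back to the master:

    # Start a Jenkins build slave as a container.
    docker run -d --name build-slave-1 jenkinsci/jnlp-slave \
      -url http://jenkins.example.com:8080 <agent-secret> <agent-name>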
- How did you come to Docker?
- If we talk about this project, when I joined there was no Docker at all. There was a classic setup: two half-empty slaves, most pipelines simply did not exist, and all the buttons were pressed by people. In other words, from the CI point of view little had been done. I chose Docker as the tool because I already knew it would work well and would make maximum use of the resources we had at the time.
- And what about using Docker in production?
- I can tell you about Docker "with an Amazon flavor." Amazon has a service called Elastic Container Service (ECS). Using it, in half an hour I wrote a bash script that implements zero-downtime deployment. Docker is used under the hood: we have a machine that builds images and pushes them to the registry in Amazon, and then the magic of ECS takes over: you create a task, choose a service, set how many copies should be brought up, and that is all, pure magic! I should note that I have known Amazon for a long time and am used to "JSON programming," but the very fact that in half an hour you can set up delivery of an application from CI to deployment, with gradual rollout and other features, matters. Amazon provides all the necessary tools, to the point that if you configure your metrics and Auto Scaling correctly, you do not need to take care of anything at all: users start piling in, and new instances with new containers on them come up automatically.
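Such a script can stay very small because the AWS CLI does the heavy lifting. A hedged sketch, with hypothetical registry, cluster, service, and task names, assuming you are already logged in to ECR:

    # Build and push the image to the registry (an ECR repository here).
    docker build -t myapp:42 .
    docker tag myapp:42 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:42
    docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:42

    # Register a task definition revision pointing at the new image, then
    # update the service; ECS replaces containers gradually, which is what
    # gives the zero-downtime rollout.
    aws ecs register-task-definition --cli-input-json file://taskdef.json
    aws ecs update-service --cluster prod --service myapp \
      --task-definition myapp --desired-count 4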
- And how did you live before Docker?
- The way everyone lives: Jenkins and Jenkins slaves; the slaves have SSH keys, they go to the machines, put the WARs, JARs, and so on in their places and restart services. With Docker everything has become more flexible and portable: now we can, in principle, deploy anywhere without changing anything; in effect, we deploy Docker containers from images without rebuilding anything.
- And how did the developers switch to Docker?
- For Linux developers the whole transition takes one line. We did, however, have to write detailed wiki instructions for several quality assurance engineers on how to download and install Docker Machine, how to install Docker Compose, and how to make it all work under Windows, but it all comes down to installing Docker Machine with VirtualBox, after which you can use the CLI utilities. Docker is a very simple tool.
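The Windows setup boils down to roughly the following (a sketch; "default" is just a machine name):

    # Create a Docker host inside VirtualBox and point the CLI at it.
    docker-machine create --driver virtualbox default
    eval "$(docker-machine env default)"
    # From here the usual CLI works against that VM.
    docker ps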
- When you switched to Docker, did you look at other technologies? What was on the market at the time?
- There were virtual machine implementations and the long-existing Virtuozzo, but there were no convenient tools that would let us do what we did with Docker; we would have had to build everything with a large Vagrant configuration, and that kind of solution quickly stops being portable.
We looked at HashiCorp's Otto; it showed promise, but it had appeared very recently at that point, and it was scary to use. So we had no alternatives to Docker, and even now, it seems to me, none exist, at least no mature ones.
- How well are the platform and ecosystem developing?
- The ecosystem is moving in the right direction. As for Docker itself, Swarm still falls well short of what Amazon offers in terms of orchestration, but overall things are pretty good. I even did a little R&D: it turned out that if the need arises, we can migrate to Swarm fairly painlessly.
- Even without third-party services like Mesosphere?
- For our needs, yes: everything we need is in Swarm. We do not use complex networking or mutual integration of containers. We have a fairly simple infrastructure.
- Can you describe it in a nutshell?
- We have three layers: an nginx entry point, frontend containers, several different kinds of API and data-processing services, and several services that talk to databases. The API services either read from the database themselves or send requests to Redis, and the services that deal with data and its modification take everything they need from Redis. If we ever need to migrate away from Amazon, where Redis and the database are available as a service, we will have to run those ourselves.
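In Docker terms such a three-layer setup could be wired up roughly like this (a sketch with hypothetical image names, using a user-defined network so containers reach each other by name):

    docker network create app-net
    # Data layer (on Amazon these are managed services instead).
    docker run -d --net app-net --name redis redis:3
    # API and data-processing layer.
    docker run -d --net app-net --name api my-api-image
    # Frontend plus the nginx entry point, the only published port.
    docker run -d --net app-net --name frontend my-frontend-image
    docker run -d --net app-net --name edge -p 80:80 nginx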
- Where should you not use Docker?
- I think there is no point in putting it into number crunchers. If you have a reporting tool that digests huge amounts of data, Docker's overhead will affect performance, and it makes no sense for rendering farms either. In general, where raw performance is needed, Docker is not a great fit. Or rather: wherever you could use virtual machines, Docker will fit perfectly.
Also, as far as I know, Docker does not yet work with Windows containers, although the other day there was news that Windows Server 2016 will support Docker containers, so Microsoft is working with the Docker team and this will most likely change soon.
- And if we talk not about the infrastructure but about the applications themselves: have your approaches changed with the advent of Docker?
- Hardly at all. We prepared from the start for an infrastructure of small and medium virtual machines that had to scale well horizontally, so Docker simply replaced the bottom layer. Of course, in some places we added checks on the application's state. That is, where earlier we assumed that an application or a virtual machine could restart without losing data, now everyone understands that if the container dies, its data dies with it and must be stored elsewhere; Redis and PostgreSQL solve that problem.
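The usual Docker-side answer to "data must outlive the container" is an external store or a named volume; a minimal sketch (the volume and container names are illustrative):

    # A named volume lives outside the container's filesystem;
    # Docker creates it on first use.
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.5
    # Removing the container does not destroy the data:
    docker rm -f db
    docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres:9.5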
- How mature a technology is Docker?
- Prod-ready. When you eat it, you are at least not disgusted, especially if you know how to cook.
Docker is about tooling. Before it there were cgroups, zones, and a bunch of other technologies, but it was Docker that made them accessible and popular.
Sergey Egorov (bsideup) is a Full Stack Engineer at ZeroTurnaround. He moved to Estonia in 2013; before that he developed games at various Russian companies. He loves Docker, building highly loaded systems, poking around in the Groovy compiler, and other open source projects.
- How did you come to Docker? What projects do you use it on?
- In 2014 I worked at Creative Mobile and was responsible for developing and deploying servers in our department. These were social games, with peaks of up to 1000 requests per second.
At the time I used Puppet, with which I deployed a JBoss AS (later WildFly) cluster in an EC2 Auto Scaling group, but unfortunately this solution was extremely shaky: to deploy servers quickly (and we had Continuous Delivery with dozens of releases a day) we had to sacrifice the immutability of deployments.
As a result, we sometimes hit situations where old packages or configuration (which was in XML, read: "one of the most inconvenient formats to change from the command line") conflicted with new ones.
I began to study options for immutable deployments. Since we were hosted on AWS, at that time the choice was between OpsWorks and Elastic Beanstalk. After a long study the choice fell on Elastic Beanstalk, where I noticed the option to use "some kind of Docker, version 0.9," offered as "if none of the previously announced solutions suits you, here is an abstract system for declaring how you want to run your application." I immediately found it useful to study what it was and how to cook it, and the very next day our production was running in Docker. Yes, Docker in 2014, even before we switched to it in local development.
After that, I successfully introduced Docker on several other high-load Creative Mobile projects. Moving to TransferWise, I actively promoted the idea of dockerization, but since the company was in a period of active migration to microservices, I did not have to spend long explaining the advantages, except that I had to do a bit of battle with the tellers :)
- There is an opinion that in some scenarios Docker does not deliver high performance because of its overhead. What do you think about this?
- I think this is one of the most popular Docker myths. In such matters I personally believe only in numbers and in personal experience. And if personal experience inspires little trust, the numbers do not lie, and I recommend looking at the research of major market players, such as IBM.
Note that that document is dated July 2014, and in the two years since, the Docker team has made a huge number of improvements, including performance ones.
For example, in 2016 Percona measured Docker's impact on database IO performance. The result: they did not even bother publishing pretty graphs, because the results were identical to the same workload on bare metal.
I have seen roughly the same results in my own practice, running everything in Docker, from the local development environment to the build systems (yes, we run the Jenkins master in Docker) and production.
- Where should you not use Docker?
- At the moment, despite the huge number of solutions, it is painful to run stateful applications in Docker (databases with persistent storage, all kinds of warm-start caches, non-scalable backends). The pain comes not from Docker as such, but from the fact that the applications themselves are not ready to be restarted on another host, to have their file system migrated, and so on.
Also, if your application works closely with hardware, then passing hardware into a container is still considered non-trivial. It is possible, but the tooling (and I believe Docker is not about containerization but about tooling) is not yet ready to make it really simple out of the box.
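For the record, Docker can expose individual host devices to a container via the --device flag; a sketch (the device path and image name are assumptions for illustration):

    # Pass a single host device into the container.
    docker run --rm --device /dev/snd my-audio-image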
- You have been working with Docker for several years now. Are you considering alternatives? Do they exist at all?
- There are alternatives, and there always will be: some people simply do not like the mainstream :)
Also, major market players are concerned that they do not have enough influence over Docker's development.
Fortunately, the folks at Docker Inc. are far from stupid, and their efforts to standardize everything, the Open Container Initiative for example, only show that they do not want to fight over the container market; their market is first-class tooling.
Finally, I cannot help sharing my Docker pride :D

If you want to learn more about Continuous Delivery, orchestration, and containerization, you will probably be interested in the talks at Joker 2016 (St. Petersburg, October 14-15).