
At some point I decided to write an article about delivering software in the form of Docker containers and deb packages, but once I started, something carried me back to the first personal computers and even to calculators. So instead of a dry comparison of Docker and deb, what came out is a reflection on evolution, which I submit for your judgment.
Any product, whatever it is, must somehow reach the production servers, be configured, and be started. That is what this article is about.
I will reflect in a historical context, in the spirit of "I sing of what I see": what I saw when I started writing code and what I observe now, what we currently use and why. The article does not claim to be a full-fledged study; some points are skipped; it is my personal view of what was and what is.
So, in the good old days... the earliest delivery method I encountered was tape cassettes. I had a BK-0010.01 computer...
The era of calculators
No, wait, there was an even earlier stage: the MK-61 and MK-52 calculators.

So, back when I had the MK-61, the way to transfer a program was an ordinary sheet of squared paper with the program written on it, which you then typed into the calculator by hand when needed. If you wanted to play (yes, even this antediluvian calculator had games), you sat down and entered the program into the calculator. Naturally, when the calculator was switched off, the program vanished into oblivion. Besides calculator codes written out on paper, programs were published in the magazines "Radio" and "Tekhnika Molodezhi", as well as in books of that time.
The next model was the MK-52 calculator, which already had a form of non-volatile data storage. Now you did not have to key in a game or program by hand: after a few magic passes over the buttons, it loaded itself.
The largest program the calculator could hold was 105 steps, and the permanent memory of the MK-52 held 512 steps.
By the way, if any fans of these calculators are reading this article: while writing it, I found both a calculator emulator for Android and programs for it. Forward to the past!
A small digression about the MK-52 (from Wikipedia)
The MK-52 flew into space aboard Soyuz TM-7. It was supposed to be used to calculate the landing trajectory in case the on-board computer failed.
Starting in 1988, the MK-52 with the "Elektronika-Astro" memory expansion unit was supplied to Navy ships as part of the navigator's computing kit.
First personal computers

Let's return to the days of the BK-0010. Obviously it had more memory, and typing code in from paper was no longer really an option (although at first I did exactly that, because there was simply no other medium). Audio cassettes for tape recorders became the main means of storing and delivering software.

A program on tape was usually stored as one or two binary files; everything else was contained inside them. Reliability was dreadful, and I had to keep two or three copies of each program. Load times were nothing to be happy about either, and enthusiasts experimented with different frequency encodings to overcome these shortcomings. I was not yet doing professional software development at the time (beyond simple BASIC programs), so I won't tell you in detail how everything was arranged inside. The mere fact that the computer had only RAM largely dictated the simplicity of the data storage scheme.
The emergence of reliable and large storage media
Floppy disks came later; copying became simpler and reliability grew.
But the situation changed dramatically only when sufficiently large local storage appeared in the form of hard drives.
The type of delivery changed fundamentally: installer programs appeared that managed configuration and cleaned up after removal, because programs were no longer simply read into memory but copied onto local storage, from which you had to be able to remove what was no longer needed.
In parallel, the complexity of the delivered software grew.
The number of files in a delivery grew from one to hundreds and thousands; library version conflicts and other joys began when different programs used the same data.

At that time Linux had not yet revealed itself to me; I lived in the world of MS-DOS and, later, Windows, wrote in Borland Pascal and Delphi, and occasionally glanced towards C++. Back then many people used InstallShield (en.wikipedia.org/wiki/InstallShield), which successfully handled all the tasks of deploying and configuring software.
The Internet era
Gradually software systems grew still more complex, and a transition took place from monolithic and desktop applications to distributed systems, thin clients and microservices. Now you had to configure not just one program but a whole set of them, and in such a way that they all got along together.
The concept changed completely: the Internet arrived, and with it the era of cloud services. At that point it was still being interpreted in its initial form, as websites; hardly anyone dreamed of services yet. But it was a turning point both for development and for the application delivery industry.
For my part, I noticed that at this moment there was a generational change of developers (or maybe it was only in my environment), and it felt as though all the good old delivery methods were forgotten in an instant and everything started over from scratch: delivery was now done with scripts knocked together on the knee, and this was proudly called "continuous delivery". In reality a period of chaos began, in which the old was forgotten and unused, and the new simply did not exist yet.
I remember how, at the company where I worked back then (I won't name it), instead of building with Ant (Maven was not yet popular, or did not exist at all), people simply built the jar in the IDE and serenely committed it to SVN. Deployment then consisted of fetching the file from SVN and copying it over SSH to the target machine. Simple and clumsy.
At the same time, delivery of simple PHP sites was done quite primitively: just copy the edited file to the target machine over FTP. Sometimes there was not even that; the code was edited live right on the production server, and it was considered particular chic if backups existed somewhere.
RPM and DEB packages

On the other hand, as the Internet developed, UNIX-like systems gained more and more popularity; in particular, it was around 2000 that I discovered Red Hat Linux 6 for myself. Naturally, it had its own means of software delivery: according to Wikipedia, RPM appeared as the main package manager back in 1995, in Red Hat Linux 2.0. From then until now the system has been delivered as RPM packages, and it continues to exist and develop quite successfully.
The distributions of the Debian family went the same way and implemented delivery in the form of deb packages, which likewise persists to this day.
Package managers let you deliver the software products themselves, configure them during installation, manage dependencies between packages, and remove products and clean up the leftovers during uninstallation. That is, for the most part, everything that is needed, which is why they have lasted several decades with almost no changes.
The cloud era added to package managers the ability to install not only from physical media but also from network repositories, but fundamentally little changed.
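To make that concrete, here is what the whole lifecycle looks like from the operator's side on a Debian-family system (a minimal sketch; the package name myservice is made up for illustration):

    # Install: the package manager pulls the package and everything it
    # depends on from the configured (local or remote) repositories.
    sudo apt install myservice

    # Upgrade: a newer version replaces the old one in place.
    sudo apt install --only-upgrade myservice

    # Remove, or remove together with its configuration files.
    sudo apt remove myservice
    sudo apt purge myservice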
It is worth noting that nowadays there is some movement away from deb towards snap packages, but more on that later.
So this new generation of cloud developers, who knew neither deb nor RPM, slowly grew up and gained experience; products became more complex, and more sensible delivery methods were needed than FTP, bash scripts, and similar student handiwork.
And here Docker enters the scene: a kind of mix of virtualization, resource limits, and a delivery method. It is fashionable and trendy now, but is it needed everywhere? Is it a panacea?
From my observations, Docker is very often proposed not as a rational choice, but simply because, on the one hand, it is what the community talks about and those proposing it know nothing else, while on the other hand the good old packaging systems mostly keep quiet: they just exist and do their job quietly and unnoticed. In such a situation there seems to be no alternative; the choice is obvious: Docker.
I will try to share our experience of how we adopted Docker and what came of it.
Custom scripts
Initially there were bash scripts that deployed jar archives to the necessary machines; Jenkins managed this process. It worked well enough, since a jar archive is already an assembly containing classes, resources, and even configuration. If you pack as much as possible into it, then rolling it out with a script is not the hardest of tasks.
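Such a script was roughly the sketch below (host, paths and service name are hypothetical; in reality Jenkins invoked something of this sort):

    #!/usr/bin/env bash
    # A "written in haste" deploy: copy the built jar to the target host
    # and restart the service there. No rollback, no dependency handling.
    set -euo pipefail

    JAR="build/libs/myservice.jar"      # artifact produced by the build job
    HOST="app01.example.com"            # target machine
    DEST="/opt/myservice"

    scp "$JAR" "deploy@${HOST}:${DEST}/myservice.jar"
    ssh "deploy@${HOST}" "sudo systemctl restart myservice"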
But scripts have several drawbacks:
- Scripts are usually written in haste and are therefore so primitive that they cover only the single happy-path scenario. This is encouraged by the fact that the developer wants the fastest possible delivery, while a proper script requires investing a decent amount of resources
- as a consequence of the previous point, the scripts contain no uninstall procedure
- there is no upgrade procedure either
- when a new product appears, a new script has to be written
- there is no dependency support
Of course, you can write a clever script, but, as I noted above, that takes development time, and not a little of it, and time, as we know, is always in short supply.
All this obviously limits such a deployment method to only the simplest systems. The time came to change it.
Docker

At some point, freshly minted mid-level developers began to arrive, brimming with ideas and raving about Docker. Well then, flag in hand, go for it! There were two attempts. Both failed, so to speak, because of big ambitions but a lack of real experience. Was it necessary to push through and finish at any cost? Hardly: a team has to grow to the required level through its own evolution before it can use the corresponding tools. On top of that, when using ready-made Docker images we often ran into the network working incorrectly (which may also have been due to the rawness of Docker itself at the time), or found it difficult to extend other people's containers.
What inconveniences did we run into?
- Network problems in bridge mode
- It is inconvenient to view logs inside a container (unless they are shipped out to the host file system separately; a workaround is sketched right after this list)
- Periodic strange hangs of Elasticsearch inside the container; the cause was never established, even though the container was the official one
- It is awkward to use a shell inside a container: everything is cut down to the bare minimum, the usual tools are missing
- The containers we build are large, which makes them expensive to store
- Because of their large size, it is hard to keep multiple versions around
- Builds take longer than with other methods (scripts or deb packages)
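For the logging point, the workaround mentioned above looks roughly like this (image name and paths are made up): bind-mount the log directory onto the host so the usual tools keep working.

    # Hedged sketch: the container writes its logs into a directory that is
    # actually a host directory, so tail/grep work without entering the container.
    docker run -d --name myservice \
        -v /var/log/myservice:/var/log/myservice \
        myservice:1.0.0

    tail -f /var/log/myservice/app.log   # read from the host as usual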
On the other hand, why would a Spring service in the form of a jar archive be any worse off deployed via that same deb? Is resource isolation really necessary? Is it worth losing the convenient tools of the operating system by stuffing the service into a heavily stripped-down container?
As practice has shown, it is not really necessary: a deb package is enough in 90% of cases.
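For illustration, a minimal sketch of wrapping such a jar service into a deb (package name, paths and the dependency are hypothetical, maintainer scripts omitted):

    # Hypothetical minimal layout for a deb that carries a jar service.
    mkdir -p myservice_1.0.0/DEBIAN myservice_1.0.0/opt/myservice

    cat > myservice_1.0.0/DEBIAN/control <<'EOF'
    Package: myservice
    Version: 1.0.0
    Architecture: all
    Maintainer: Ops Team <ops@example.com>
    Depends: default-jre-headless
    Description: Example Spring service delivered as a deb package
    EOF

    cp build/libs/myservice.jar myservice_1.0.0/opt/myservice/

    dpkg-deb --build myservice_1.0.0     # produces myservice_1.0.0.deb

After that, installation, upgrades and removal are handled by the same apt commands shown earlier.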
So when does the good old deb fall short, and when did we actually need Docker?
For us it was the deployment of Python services. Many libraries needed for machine learning are missing from the operating system's standard repositories (or are present in the wrong versions); add hacks with settings and the need for different versions for different services living on the same host system, and the only sensible way to deliver this explosive mixture turned out to be Docker. Assembling a Docker container proved less laborious than the idea of packing it all into separate deb packages with dependencies, which, frankly, no one in their right mind would have taken on.
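In sketch form (the Python version, file names and libraries are invented for illustration), the idea is that each service freezes its own interpreter and library versions inside the image, so services with incompatible dependencies can live on one host:

    # Hypothetical Dockerfile for one of the Python/ML services.
    cat > Dockerfile <<'EOF'
    FROM python:3.6-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt   # pinned numpy/scipy/scikit-learn versions
    COPY . .
    CMD ["python", "service.py"]
    EOF

    docker build -t ml-service-a:1.0 .
    docker run -d --name ml-service-a ml-service-a:1.0
    # A second service with conflicting library versions simply gets its own image.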
The second place where we plan to use Docker is for deploying services under a blue-green deployment scheme. But here I want the complexity to grow gradually: deb packages are built first, and then a Docker container is assembled from them.
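As a sketch (names and base image are assumptions), the container then becomes a thin wrapper around the deb that has already been built and tested on its own:

    # Hedged sketch of "deb first, Docker second": install the existing deb
    # into the image instead of rebuilding the service from scratch.
    cat > Dockerfile <<'EOF'
    FROM ubuntu:18.04
    COPY myservice_1.0.0.deb /tmp/
    RUN apt-get update \
        && apt-get install -y /tmp/myservice_1.0.0.deb \
        && rm -rf /var/lib/apt/lists/* /tmp/myservice_1.0.0.deb
    CMD ["java", "-jar", "/opt/myservice/myservice.jar"]
    EOF

    docker build -t myservice:1.0.0 .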
Snap packages

Let's come back to snap packages. They first officially appeared in Ubuntu 16.04. Unlike ordinary deb and rpm packages, a snap carries all of its dependencies with it. On the one hand, this avoids library conflicts; on the other, the resulting package is considerably larger. It can also affect the security of the system: with snap delivery, the developer who builds the package has to track all changes to the bundled libraries himself. In short, it is not all so clear-cut, and universal happiness does not follow from using them. Nevertheless, it is quite a reasonable alternative when Docker is used only as a packaging tool rather than for virtualization.
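For comparison, a very rough sketch of a snapcraft recipe (everything here is invented, and real recipes vary): the key point is that the snap bundles the service together with everything it needs.

    # Rough, hypothetical snapcraft.yaml for a prebuilt service.
    cat > snapcraft.yaml <<'EOF'
    name: myservice
    version: '1.0.0'
    summary: Example service packaged as a snap
    description: Ships the service together with all of its dependencies.
    confinement: strict
    grade: stable
    parts:
      myservice:
        plugin: dump        # just copy the prebuilt files into the snap
        source: .
    apps:
      myservice:
        command: bin/run.sh
    EOF

    snapcraft                # builds something like myservice_1.0.0_amd64.snap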
As a result, we now use a sensible combination of deb packages and Docker containers, which, in some cases, we may eventually replace with snap packages.