On the eve of DevOpsConf, Vitaly Khabarov interviewed Dmitry Stolyarov (distol), technical director and co-founder of the company “Flant”. Vitaly asked Dmitry what “Flant” does, about Kubernetes, ecosystem development and support. We discussed why Kubernetes is needed and whether it is needed at all, as well as microservices, Amazon AWS, the "I'm feeling lucky" approach to DevOps, the future of Kubernetes itself - why, when and how it will take over the world - DevOps perspectives, and what engineers should prepare for in the bright and near future of simplification and neural networks.
You can listen to the original interview as a podcast at DevOps Deflope, a Russian-language podcast about DevOps; below is the text version.

Here and throughout, the questions are asked by Vitaly Khabarov, an engineer at Express42.
About "Flant"
- Hello, Dima. You are the technical director of “Flant” and also one of its founders. Please tell us what the company does and what your role in it is.
Dmitry: From the outside it looks as if we are the guys who go around installing Kubernetes for everyone and then doing something with it. But that is not so. We started as a company that deals with Linux, but for a very long time now our main activity has been production and turnkey highload projects. Usually we build the entire infrastructure from scratch and then stay responsible for it for a long, long time. So the main work that Flant does, and gets paid for, is taking responsibility and delivering turnkey production.
As technical director and one of the founders of the company, I work around the clock figuring out how to increase the availability of production, simplify its operation, make life easier for administrators and make life more pleasant for developers.
About Kubernetes
- Lately I have seen many talks and articles about Kubernetes from Flant. How did you come to it?
Dmitry: I have already talked about this many times, but I do not mind repeating it at all. I believe it is right to repeat this topic, because there is a confusion of cause and effect.
We really needed a tool. We faced a bunch of problems, struggled, overcame them with various crutches, and felt the need for a tool. We went through many different options, built our own bicycles, accumulated experience. Gradually we got to the point where we started using Docker almost as soon as it appeared, around 2013. By the time it appeared, we already had a lot of experience with containers; we had already written our own analogue of Docker, some crutches of ours in Python. With the advent of Docker it became possible to throw out the crutches and use a reliable, community-supported solution.
With Kubernetes the story is similar. By the time it began to gain momentum - for us that was version 1.2 - we already had a bunch of crutches in both shell and Chef with which we somehow tried to orchestrate Docker. We looked seriously at Rancher and various other solutions, but then Kubernetes appeared, in which everything was implemented exactly as we would have done it, or even better. There is nothing to complain about.
Yes, there is a flaw here and a flaw there - there are a lot of flaws, and 1.2 is horrible in general, but... Kubernetes is like a building under construction: you look at the plans and understand that it is going to be cool. If a building only has a foundation and two floors, you understand it is better not to move in yet, but with software there is no such problem - you can already use it.
We never had a moment when we debated whether to use Kubernetes or not. We were waiting for it long before it appeared, and in the meantime tried to cobble together analogues ourselves.
Participation in Kubernetes development
- Do you participate directly in the development of Kubernetes itself?
Dmitry: Only indirectly. Rather, we participate in the development of the ecosystem. We send a certain number of pull requests: to Prometheus, to all sorts of operators, to Helm - to the ecosystem. Unfortunately, I am not able to follow everything we do, so I may be wrong, but there is not a single pull request from us in the core.
- At the same time, you develop a lot of your own tools around Kubernetes?
Dmitry: The strategy is this: we go and send pull requests to everything that already exists. If the pull requests are not accepted, we simply fork it and live on our own builds until they are accepted. Then, when it reaches upstream, we go back to the upstream version.
For example, we have the Prometheus operator, with which we have switched back and forth between upstream and our own build about five times already. We need some feature, we send a pull request, but we need to roll it out tomorrow and do not want to wait until it is released upstream. So we build it ourselves and roll our build, with the features we need, out to all our clusters. Then, for example, upstream comes back to us with "Guys, let's do this for a more general case," we, or someone else, finish it, and eventually it is merged back in.
We try to develop everything that already exists. Many things that do not yet exist, or that have been invented but not yet implemented, we do ourselves. Not because we like the process itself, or reinventing wheels as an industry, but simply because we need the tool. We are often asked why we made this or that thing. The answer is simple: because we had to go further, solve some practical problem, and we solved it with this tool.
The path is always the same: we look around very carefully and, if we do not find any solution for how to make a trolleybus out of a loaf of bread, we make our own loaf and our own trolleybus.
Flant's tools
- I know that Flant now has the addon-operator and shell-operator tools, and dapp/werf. As I understand it, these are the same tool in different incarnations. I also understand that there are many more different tools inside Flant. Is that true?
Dmitry: We have a lot more on GitHub. From what I can remember right now, we have statusmap - a panel for Grafana that has caught on with everyone. It is mentioned in almost every second article about monitoring Kubernetes on Medium. It is impossible to explain briefly what statusmap is - that needs a separate article - but it is a very useful thing for monitoring status over time, since in Kubernetes we often need to show status over time. We also have LogHouse - a thing based on ClickHouse and black magic for collecting logs in Kubernetes.
Lots of utilities! And there will be even more, because a number of internal solutions will be released this year. Among the very large ones based on the addon-operator there is a whole bunch of addons for Kubernetes, along the lines of how to install cert-manager correctly - a tool for managing certificates - or how to install Prometheus correctly with a bunch of accessories: that is about twenty different binaries that export data and collect something, plus beautiful dashboards and alerts for that Prometheus. All of this is just a bunch of addons for Kubernetes that get installed into a cluster, and the cluster turns from a simple one into a cool, sophisticated, automatic one in which many issues have already been solved. Yes, we do a lot.
Ecosystem development
- I think this is a very big contribution to the development of this tool and its methods of use. Can you think of anyone else who makes a comparable contribution to the development of the ecosystem?
Dmitry: In Russia, of the companies operating in our market, no one comes close. Of course, that is a loud statement, because there are large players, like Mail.ru and Yandex - they also do things with Kubernetes - but even they have not come close to the contribution of companies around the world that do much more than we do. It is hard to compare Flant, with a staff of 80 people, and Red Hat, which has 300 engineers just for Kubernetes, if I am not mistaken. It is hard to compare. We have 6 people in the R&D department, including me, who crank out all our stuff. 6 people against 300 Red Hat engineers - it is somehow hard to compare.
- Nevertheless, when even these 6 people can do something really useful and reusable, when they face a practical task and give the solution to the community, that is an interesting case. I understand that in large technology companies, which have their own Kubernetes development and support teams, the same kinds of tools could in principle be developed. This is an example for them of what can be developed and given to the community, giving an impetus to the whole community that uses Kubernetes.
Dmitry: Probably this is an integrator's trait, its peculiarity. We have many projects and we see many different situations. For us, the main way to create added value is to analyze these cases, find what they have in common, and make that as cheap as possible for us. We are actively engaged in this. It is hard for me to speak for Russia and the world, but we have about 40 DevOps engineers in the company who deal with Kubernetes. I do not think there are many companies in Russia with a comparable number of specialists who understand Kubernetes, if there are any at all.
I understand everything about the job title "DevOps engineer": everyone understands everything and is used to calling DevOps engineers DevOps engineers, so we will not argue about that. All these 40 wonderful DevOps engineers face problems every day and solve them; we simply analyze that experience and try to generalize it. We understand that if it stays inside the company, then in a year or two the tool will be useless, because somewhere in the community a ready-made tool will appear. There is no point in accumulating this experience internally - it is just draining time and effort into /dev/null. So we do not mind at all. We are happy to publish everything, and we understand that it has to be published, developed, promoted, pushed, so that people use it and add their experience - then everything grows and lives. Then after two years the tool does not end up in the trash. It is not a pity to keep pouring effort in, because it is clear that someone is using your tool, and after two years everyone is using it.
This is part of our big strategy with dapp/werf. I do not remember when we started it, about 3 years ago, it seems. Initially it was entirely in shell. It was a super proof of concept; we solved some of our particular tasks, and it worked! But shell has problems: you cannot grow it further, and programming in shell is a different business altogether. We had a habit of writing in Ruby, so we rewrote it in Ruby and developed, developed, developed it, and then ran into the fact that the community - the crowd that does not say "we want it this way or not" - turns up its nose at Ruby, funny as that is. We realized that we should write the whole thing in Go, simply to match the first item on the checklist: a DevOps tool should be a static binary. Go or not Go is not that important, but a static binary written in Go is better.
We put in the effort, rewrote dapp in Go and called it werf. Dapp is no longer supported or developed; it works in some last version, but there is an absolute upgrade path to the top, and it can be followed.
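As a rough illustration of that checklist item (this is not Flant's code, just a minimal sketch assuming a standard Go toolchain), here is how even a trivial CLI becomes a single self-contained executable:

```go
// main.go - a deliberately tiny CLI; the point is the packaging, not the logic.
// Building with CGO disabled produces one static binary with no runtime
// dependencies, which is exactly the property wanted from a DevOps tool:
//
//   CGO_ENABLED=0 go build -o mytool .
package main

import (
	"fmt"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: mytool <command>")
		os.Exit(1)
	}
	fmt.Printf("mytool: running %q\n", os.Args[1])
}
```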
Why was dapp created?
- Can you briefly explain why dapp was created and what problems it solves?
Dmitry: The first reason is building. Initially we had serious build problems, back when Docker could not do multi-stage builds, so we implemented multi-stage on our own. Then we had a lot of questions about cleanup. Everyone who does CI/CD sooner or later runs into the problem that there is a pile of built images and you need to somehow clean out what is not needed and keep what is needed.
The second reason is deployment. Yes, there is Helm, but it solves only part of the problems. Funny as it sounds, it is written that "Helm is the Package Manager for Kubernetes." Note that "the". There are also the words "Package Manager": what do we usually expect from a package manager? We say: "Package manager, install the package!" and we expect it to tell us: "The package is installed."
Interestingly, we say: "Helm, install the package," and when it replies that it has, it turns out that it has only started the installation - it told Kubernetes: "Launch this thing!" Whether it actually started or not, whether it works or not, Helm does not deal with that question at all.
It turns out that Helm is just a text preprocessor that loads data into Kubernetes.
But within any deployment we want to know: did the application roll out to production or not? Rolled out to production means that the application got there, the new version is deployed, and at the very least it does not crash and responds correctly. Helm does not solve this problem. To solve it you have to spend a lot of effort, because you need to give Kubernetes the command to roll out and then watch what is happening there: did it deploy, did it roll out. And then there is a whole pile of tasks connected with deployment, with cleanup, with building.
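This is roughly the check that has to be written on top of Helm. Below is a minimal sketch of the idea - not how werf actually implements it - assuming a recent client-go, a kubeconfig in the default location, and a hypothetical Deployment named myapp; it polls the Deployment status much like kubectl rollout status does:

```go
// rollout_check.go - wait until a Deployment has actually rolled out, i.e. the
// controller has observed the new spec and all replicas are updated and
// available. Sketch only; error handling is minimal.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	namespace, name := "default", "myapp" // hypothetical Deployment
	deadline := time.Now().Add(5 * time.Minute)

	for time.Now().Before(deadline) {
		dep, err := client.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		want := int32(1)
		if dep.Spec.Replicas != nil {
			want = *dep.Spec.Replicas
		}
		// The same conditions kubectl checks: the controller has seen the new
		// generation, and every replica is both updated and available.
		if dep.Status.ObservedGeneration >= dep.Generation &&
			dep.Status.UpdatedReplicas == want &&
			dep.Status.AvailableReplicas == want {
			fmt.Println("rollout complete")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("rollout did not finish in time")
	os.Exit(1)
}
```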
Plans
This year we will also move toward local development. We want to get to what Vagrant used to give us: you typed "vagrant up" and a virtual machine spun up. We want to get to a state where there is a project in Git, we type "werf up" there, and it brings up a local copy of the project, deployed in a local mini Kubernetes, with all the directories convenient for development mounted in. Depending on the development language this is done differently, but in any case so that you can comfortably do local development against the mounted files.
The next step for us is to invest heavily in developer convenience. So that a project can be deployed locally quickly with a single tool, developed, pushed to Git, and it rolls out to staging or to tests, depending on the pipelines, and then goes to production with the same tool. This unity, unification, reproducibility of the infrastructure from the local environment all the way to production is very important to us. But this is not in werf yet - we are only planning to do it.
But the path to dapp/werf was always the same as it was with Kubernetes at the beginning. We ran into problems and solved them with workarounds - we came up with some solutions for ourselves in shell, in whatever was at hand. Then we tried to straighten out these workarounds, generalize them and consolidate them into binaries, which in this case we simply share.
There is another way to look at this whole story, with analogies.
Kubernetes is a car frame with an engine. There are no doors, no glass, no radio, no little fir tree on the mirror - nothing at all. Only the frame and the engine. And there is Helm - that is the steering wheel. Cool, there is a steering wheel, but you also need a steering pin, a steering rack, a gearbox and wheels, and without them you will not get anywhere.
In the case of werf, this is yet another component for Kubernetes. It is just that in our alpha version of werf, for example, Helm is now compiled right inside werf, because we got tired of doing that part ourselves. There are many reasons for doing it this way; I will talk in detail about why we compiled Helm, together with Tiller, entirely inside werf in my talk at RIT++.
Now werf is a more integrated component. We get a ready-made steering wheel plus steering pin - I am not good with cars, but it is a big block that solves a fairly wide range of tasks. We do not need to dig through the catalogue ourselves, matching one part to another and figuring out how to bolt them together. We get a ready-made combine that solves a big pack of tasks at once. But inside it is built from the same open source components: it uses Docker for building, Helm for part of the functionality, and there are several other libraries. It is an integrated tool for getting cool CI/CD out of the box quickly and conveniently.
Is it hard to maintain Kubernetes?
- You talk about your experience: that Kubernetes for you is a frame and an engine, and that you can hang a lot of different things on it - body panels, a steering wheel, pedals, seats. The question is: how hard is supporting Kubernetes for you? You have a wealth of experience; how much time and resources does supporting Kubernetes take, apart from everything else?
Dmitry: This is a very difficult question, and to answer it we need to understand what support is and what we want from Kubernetes. Maybe you can elaborate?
- As far as I know and as I see, many teams now want to try Kubernetes. Everyone is jumping on it, setting it up on their knee. I have a feeling that people do not always understand the complexity of this system.
Dmitry: Exactly so.
- How hard is it to take Kubernetes and install it from scratch so that it is production ready?
Dmitry: How hard do you think a heart transplant is? I understand, it is a loaded question. Wielding a scalpel without making a mistake is not that difficult. If you are told where to cut and where to sew, the procedure itself is simple. What is difficult is guaranteeing, time after time, that it will all work out.
Installing Kubernetes and making it work is easy: bam! - it is installed; there are plenty of installation methods. But what happens when problems arise?
Questions always come up: what have we not taken into account yet? What have we not done yet? Which Linux kernel parameters are set incorrectly? Lord, did we even set them at all?! Which Kubernetes components have we installed and which have we not? Thousands of questions arise, and to answer them you need to have stewed in this industry for 15-20 years.
I have a fresh example on this topic that shows the essence of the question "Is it hard to maintain Kubernetes?". Some time ago we seriously considered whether we should try to adopt Cilium as the network in Kubernetes.
Let me explain what Cilium is. Kubernetes has many different implementations of the network subsystem, and one of them is very cool - Cilium. What is its point? Some time ago it became possible to write hooks for the kernel that reach into the network subsystem and various other subsystems and make it possible to bypass large chunks of the kernel.
In the Linux kernel there are, historically, ip route, netfilter, bridges and many other old components that are 15, 20, 30 years old. In general they work, everything is fine, but now containers have been piled on top, and it looks like a tower of 15 bricks stacked on one another with you standing on it on one leg - a strange feeling. This system evolved historically with many nuances, like an appendix in the body. In some situations there are performance problems, for example.
There is the wonderful BPF and the ability to write kernel hooks - and the guys wrote their own kernel hooks. A packet arrives at the Linux kernel, they pull it out right at the entrance, process it themselves as needed - without bridges, without TCP, without the IP stack, in short bypassing everything written in the Linux kernel - and then spit it out into the container.
What does that give? Very cool performance, cool features - just great! But then we look at it and see that on each machine there is a program that connects to the Kubernetes API and, based on the data it receives from the API, generates C code and compiles binaries which it loads into the kernel so that these hooks work in kernel space.
What happens if something goes wrong? We do not know. To understand that, you would need to read all this code, understand all the logic, and it is amazing how hard that is. But on the other hand there are those bridges, netfilter, ip route - I have not read their source code, and neither have the 40 engineers who work in our company. Maybe only a handful of people understand certain pieces.
So what is the difference? It turns out there is ip route and the Linux kernel, and there is the new tool - what difference does it make, we do not understand either one. But we are afraid to use the new one - why? Because if a tool is 30 years old, then over 30 years all the bugs have been found, all the rakes have been stepped on, and you do not need to know everything: it works like a black box and it always works. Everyone knows which diagnostic screwdriver to stick where, which tcpdump to run at which point. Everyone knows the diagnostic tools well and understands how this set of components works in the Linux kernel - not how it works inside, but how to use it.
And the awesomely cool Cilium is not 30 years old; it has not had time to mature. Kubernetes has exactly the same problem, one to one. Cilium installs beautifully, Kubernetes installs beautifully, but when something goes wrong in production, can you, in a critical situation, quickly figure out what went wrong?
So when we ask whether it is hard to maintain Kubernetes: no, it is very easy, and yes, it is incredibly difficult. Kubernetes works fine on its own, but with a billion nuances.
About the "I'm feeling lucky" approach
- Are there companies where these nuances are almost guaranteed to show up? Suppose Yandex suddenly moves all its services onto Kubernetes at once: there will be quite some load there.
Dmitry: No, this is not a conversation about load but about the simplest things. For example, we have Kubernetes and we have deployed an application to it. How do we know it works? There is simply no ready-made tool to understand that the application is not crashing. There is no ready-made system that sends alerts; you have to set up those alerts and every graph yourself. And meanwhile we are also updating Kubernetes.
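For example, even the basic question "is the application crash-looping?" needs a check you write yourself. Here is a rough sketch of such a check with client-go (again assuming a recent client-go and a default kubeconfig; the label app=myapp is hypothetical, and this is not a Flant tool):

```go
// crash_check.go - list the application's pods and flag restarts and
// CrashLoopBackOff: the kind of check that nothing gives you "for free".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("default").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=myapp"}) // hypothetical label
	if err != nil {
		panic(err)
	}

	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.RestartCount > 0 {
				fmt.Printf("%s/%s restarted %d times\n", pod.Name, cs.Name, cs.RestartCount)
			}
			if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
				fmt.Printf("%s/%s is in CrashLoopBackOff\n", pod.Name, cs.Name)
			}
		}
	}
}
```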
Take Ubuntu 16.04. You could say it is an old version, but we are still on it because it is an LTS. There is systemd, whose nuance is that it does not clean up cgroups. Kubernetes launches pods, creates cgroups, then deletes the pods, and somehow it turns out - I do not remember the details, sorry - that systemd slices are left behind. This leads to any machine starting to slow down badly over time. It is not even a question of highload. If pods are started constantly, for example if there is a CronJob that constantly creates pods, then a machine with Ubuntu 16.04 starts slowing down within a week. There is a constantly high load average because of the huge number of cgroups created. This is a problem faced by anyone who simply installs Ubuntu 16 and puts Kubernetes on top.
Suppose systemd gets updated somehow, or something else changes, but in Linux kernels before 4.16 it is even funnier: when you delete cgroups, they leak in the kernel and are not actually deleted. So after a month of work on such a machine it becomes impossible to read the memory statistics from the stat files. You pull out a file, run it through a program, and one file takes 15 seconds, because the kernel takes a very long time to count through a million cgroups that seem to be deleted but are not - they are leaking.
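A crude way to watch for this buildup on a node is simply to count cgroup directories. The sketch below assumes cgroup v1 mounted under /sys/fs/cgroup/memory, as on a typical Ubuntu 16.04 host; leftover systemd slices show up here, while the kernel-internal "dying" cgroups of the pre-4.16 leak do not:

```go
// count_cgroups.go - rough check for the cgroup buildup described above:
// counts directories under the memory cgroup hierarchy.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/sys/fs/cgroup/memory" // assumes a cgroup v1 layout
	count := 0
	err := filepath.Walk(root, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil {
			return nil // skip entries that disappear while walking
		}
		if info.IsDir() {
			count++
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "walk failed:", err)
		os.Exit(1)
	}
	fmt.Printf("memory cgroups currently visible: %d\n", count)
	// Tens of thousands of entries on an otherwise quiet node hint that
	// slices/cgroups are not being cleaned up.
}
```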
There are a lot of little things like this here and there. This is not about what giant companies can run into under very heavy load - no, it is a matter of everyday things. People can live like this for months: they set up Kubernetes, deployed an application - it seems to work. For many that is simply normal. They will not even find out when that application crashes one day; no alert will come, but for them that is the norm. We used to live on virtual machines without monitoring, now we have moved to Kubernetes, also without monitoring - so what is the difference?
The thing is that when we walk on ice, we never know how thick it is unless we measure it beforehand. Many people just walk and do not worry, because they have walked there before.
From my point of view, the nuance and difficulty of operating any system lies in making sure that the thickness of the ice is exactly enough to solve the problems in front of us. That is what this is about.
In IT, it seems to me, there are too many "I'm feeling lucky" approaches. Many people install software and use software libraries in the hope that they will be lucky. In general, there is a lot of luck involved. That is probably why it works.
- From my pessimistic assessment it looks like this: when the risks are high and the application must work, you need support from Flant, perhaps from Red Hat, or your own internal team dedicated to Kubernetes and ready to carry it.
Dmitry: Objectively, that is so. Getting into Kubernetes on your own with a small team involves a number of risks.
Do we need containers?
- Can you tell us how widespread Kubernetes is in Russia?
Dmitry: I do not have that data, and I am not sure anyone has it at all. We say "Kubernetes, Kubernetes", but there is another way to look at the question. I do not know how widespread containers themselves are, but I know a figure from reports on the Internet: 70% of containers are orchestrated by Kubernetes. It was a reliable source with a fairly large worldwide sample.
Then there is another question: do we even need containers? My personal feeling, and the position of Flant in general, is that Kubernetes is the de facto standard.
There will be nothing but Kubernetes.
This is an absolute game-changer in infrastructure management. Just absolute - that's it, no more Ansible, Chef, virtual machines, Terraform. I am not even talking about the old collective-farm methods.
Kubernetes is an absolute game-changer, and now this is the only way it will be.
It is clear that some will need a couple of years to realize this, and some a couple of decades. I have no doubt that there will be nothing but Kubernetes and this new outlook: we no longer mess with the OS, but use infrastructure as code, only not with code but with yml - declaratively described infrastructure. I have a feeling it will always be like this.
- That is, companies that have not yet switched to Kubernetes will definitely switch to it or sink into oblivion. Did I understand you correctly?
Dmitry: That is also not entirely true. For example, if our task is to run a DNS server, it can be run on FreeBSD 4.10 and work fine for 20 years. Just work, and that's it. Maybe once in those 20 years something will need updating. If we are talking about software in the mode of "we launched it and it really runs for many years without any updates, without any changes", then of course there will be no Kubernetes there. It is not needed.
Everything that concerns CI/CD - wherever Continuous Delivery is needed, where versions need updating, where active changes need to be maintained, wherever fault tolerance needs to be built in - only Kubernetes.
About microservices
- Here I have a bit of a dissonance. To work with Kubernetes you need external or internal support - that is the first point. The second is that when we are just starting development, we are a small startup, we have nothing yet, and developing for Kubernetes, or even for a microservice architecture, can be difficult and is not always economically justified. I am interested in your opinion: do startups need to start writing for Kubernetes from scratch right away, or can they still write a monolith and only then come to Kubernetes?
Dmitry: A cool question. I have a talk about microservices, "Microservices: size does matter." Many times I have run into people trying to hammer nails with a microscope. The approach itself is correct; we design our internal software this way ourselves. But when you do it, you need to clearly understand what you are doing. The word I hate most in microservices is "micro". Historically this word ended up there, and for some reason people think that micro means very small, less than a millimeter, like a micrometer. That is not so.
For example, there is a monolith written by 300 people, and everyone who participated in the development understands that there are problems and that it should be broken into micro-pieces: about 10 pieces, each written by 30 people at a minimum. That is important, necessary and cool. But when a startup comes to us where 3 very cool and talented guys have written 60 microservices on their knee, every time I reach for the Corvalol.
It seems to me this has already been said thousands of times: people end up with a distributed monolith in one incarnation or another. It is not economically justified, and it is very hard in general. I have simply seen this so many times that it genuinely hurts, so I keep talking about it.
As for the original question, there is a conflict: on the one hand, Kubernetes is scary to use, because it is unclear what might break in it or stop working; on the other hand, it is clear that everything is moving there and there will be nothing but Kubernetes. The answer is to weigh the amount of benefit you get, the volume of tasks you can solve. That is one side of the scales. On the other side are the risks associated with downtime or with degraded response time, availability levels - with degraded performance.
So here it is: either we move fast, and Kubernetes lets us do many things much faster and better, or we use reliable, time-tested solutions but move much more slowly. Every company has to make this choice. You can think of it as a path through the jungle: the first time you walk it you may meet a snake, a tiger or a rabid badger; after you have walked it 10 times you have trodden a path, cleared the branches, and it is easy to walk. The path gets wider each time. Then it is an asphalt road, and later a beautiful boulevard.
And Kubernetes does not stand still. Again the question: Kubernetes is, on the one hand, 4-5 binaries, and on the other hand, the whole ecosystem. It is the operating system we have on our machines. What is it? Ubuntu or CoreOS? It is the Linux kernel and a bunch of additional components. All of these things: here one poisonous snake has been thrown off the road, there a fence has been put up. Kubernetes is developing rapidly and dynamically, and the amount of risk, the amount of the unexplored, decreases with every month, and accordingly those scales rebalance.
Answering the question of what a startup should do, I would say: come to Flant, pay 150 thousand rubles and get turnkey DevOps as a service. If you are a small startup with a few developers, this works. Instead of hiring your own DevOps engineer, who will need time to learn to solve your problems while you pay a salary, you get all issues solved turnkey. Yes, there are downsides.
Kubernetes in Amazon and Google
About serverless
The future of Kubernetes
- Do you have any parting wishes?
Dmitry: Yes, I have a few wishes.
The first, a mercenary one: subscribe to us on YouTube. Dear readers, go to YouTube and subscribe to our channel. In about a month we will begin an active expansion onto the video service; we will have a lot of educational content about Kubernetes, open and varied: from practical things all the way to labs, to deep fundamental theoretical things, and how to apply Kubernetes at the level of principles and patterns.
The second mercenary wish: go to GitHub and give us stars, because we feed on them. If you do not give us stars, we will have nothing to eat. It is like mana in a computer game. We do something, keep at it, keep trying; someone says these are terrible reinvented wheels, someone says everything is wrong altogether, and we keep going and act absolutely honestly. We see a problem, we solve it and we share our experience. So give us a star: it will not be taken away from you, but it will come to us, because we feed on them.
The third wish, an important one and no longer mercenary: stop believing in fairy tales. You are professionals. DevOps is a very serious and responsible profession. Stop playing around in the workplace. "We'll just click around and figure it out" - no. Imagine coming to a hospital where the doctor experiments on you. I understand this may offend someone, but most likely it is not about you, it is about someone else. Tell the others to stop too. This really spoils life for all of us: many people are starting to treat operations, admins and DevOps as the dudes who have broken something again. And things get "broken" most often because we went off to play instead of looking at the situation with a cold mind and seeing what is actually what.
This does not mean you should not experiment. You have to experiment; we do it ourselves. To be honest, we also sometimes play around - that is, of course, very bad, but nothing human is alien to us. Let's declare 2019 a year of serious, thoughtful experiments, not of games in production. Probably so.
- Thank you very much!
Dmitry: Thank you, Vitaly, for your time and for the interview. Dear readers, thank you very much if you have made it this far. I hope we have brought you at least a couple of thoughts.
In the interview Dmitry touched on werf. Now it is a universal Swiss Army knife that solves almost all problems, but it was not always so. At DevOpsConf, part of the RIT++ festival, Dmitry Stolyarov will talk about this tool in detail. The talk "werf is our tool for CI/CD in Kubernetes" will have everything: the problems and hidden nuances of Kubernetes, solutions to these difficulties, and the current implementation of werf in detail. Join us on May 27 and 28 - we will create the perfect tools.