The RIT 2018 festival in Skolkovo was large and very diverse. Mobile development, backend, frontend, DevOps, project management, and even psychology: topics for every taste, packed into a tight schedule from morning to evening. Topics were split into individual tracks, and each track was tied to a hall, so if you were interested only in talks on a particular specialty, you could settle into the right room. The keynote hall, however, was shared by speakers on a variety of topics as the schedule required.
For the most part I was soaking up DevOps knowledge, and later, sharing my impressions of the conference with colleagues, I put together a short list of the talks that stuck with me. A few months have passed, and I still remember well what they were about.
So, here are three technical talks from RIT 2018 that stayed with me.
The monitoring tools in use today do not handle applications with a microservice architecture particularly well. The more dynamic the system, the harder it is to set up monitoring for it. Convenient monitoring for cluster systems like Kubernetes, which take this dynamism to the extreme, is a decidedly non-trivial task. Why is that? Dmitry Stolyarov, technical director of Flant, talks about the reasons for this complexity and its impact on the core mission of monitoring.
Traditional monitoring systems expect to work with static servers, which are added to and removed from the application infrastructure relatively rarely. In Kubernetes, pods and services are created and deleted every second, so existing auto-discovery mechanisms simply cannot keep up with this churn.
The number of environments themselves is also measured in the tens and hundreds. Accordingly, the volume of transmitted telemetry grows by the same factor. And it all still needs to be stored somewhere.
A separate problem is the collision of the physical and virtual worlds: resource consumption by applications in Kubernetes is rather ephemeral and is expressed in terms of pod limits and requests, yet the resources consumed by pods have a very concrete physical effect on the available server capacity. When looking at charts, you always have to keep in mind from which perspective you are viewing resources. In practice, few people care about individual pods; what matters is the resource consumption of the application as a whole, and that already requires flexible grouping of per-pod telemetry by criteria defined by the users.
And the resulting scheme still needs to be multiplied several times over for the ubiquitous dev / staging / prod environments!
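To make the grouping idea concrete, here is a minimal sketch (my own illustration, not something from the talk) that joins per-pod CPU usage reported through the metrics.k8s.io API with pod labels and rolls it up per application. The `app` label, the namespace, and the use of the official `kubernetes` Python client with metrics-server installed are all assumptions made for the example.

```python
# Sketch: aggregate per-pod CPU usage into a per-application view.
# Assumes: the official `kubernetes` Python client, a reachable cluster with
# metrics-server, and pods carrying an `app` label (illustrative assumptions).
from collections import defaultdict

from kubernetes import client, config


def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity ('250m', '1', '12345678n') to cores."""
    if quantity.endswith("n"):
        return int(quantity[:-1]) / 1_000_000_000
    if quantity.endswith("u"):
        return int(quantity[:-1]) / 1_000_000
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1_000
    return float(quantity)


def cpu_by_app(namespace: str = "default") -> dict:
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    core = client.CoreV1Api()
    metrics = client.CustomObjectsApi()

    # Map pod name -> value of its `app` label (our grouping criterion).
    app_of_pod = {
        pod.metadata.name: (pod.metadata.labels or {}).get("app", "unlabeled")
        for pod in core.list_namespaced_pod(namespace).items
    }

    # Pull current usage from the metrics.k8s.io API (served by metrics-server).
    pod_metrics = metrics.list_namespaced_custom_object(
        "metrics.k8s.io", "v1beta1", namespace, "pods"
    )

    totals = defaultdict(float)
    for item in pod_metrics["items"]:
        app = app_of_pod.get(item["metadata"]["name"], "unlabeled")
        for container in item["containers"]:
            totals[app] += parse_cpu(container["usage"]["cpu"])
    return dict(totals)


if __name__ == "__main__":
    for app, cores in sorted(cpu_by_app().items()):
        print(f"{app}: {cores:.3f} cores")
```

In a real monitoring pipeline this kind of rollup is usually done by the monitoring system itself (for example, by aggregating over labels at query time), but the sketch shows why per-pod telemetry without flexible grouping is of little use on its own.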
The talk is recommended to everyone who has to maintain a Kubernetes cluster.
It was extremely interesting for us to listen to Maxim Lapshin's talk, in which he shared his rare experience of applying DevOps practices to the development of boxed products. A boxed product is traditional software that is installed and run on the customer's own hardware.
Erlyvideo develops a video streaming server; we develop a configuration server for Internet services. Our problems are in many ways similar to those that drove Erlyvideo's DevOps transformation.
Maxim begins the talk by answering the most important question: "What is all this for?". The same factors that drive the adoption of DevOps culture in service development are also present in more traditional kinds of development, and for boxed products their influence is arguably even more dramatic than when working on a service. For example, the fewer the releases, the larger the batch of changes that gets held back. When you roll out a new version of a service, you can verify, or simply convince yourself, that it is safe. But when you ship a product distribution with a large pile of changes, you have to convince not only yourself of the update's safety, but your users as well. Small but frequent portions of change come to the rescue here. And that is just one of the problems.
The talk delves into this and many other reasons for adopting continuous delivery, drawing parallels with, and highlighting the differences from, the generally simpler CI/CD workflow for services.
How is this possible? In the talk, Maxim describes the set of practices Erlyvideo uses to make continuous delivery of a stream of changes a reality. Many of the approaches will be useful as is; some will require adaptation to the realities of our work. In any case, this remarkable success story can inspire you to rethink your own problems and look for solutions among the wide variety of DevOps practices.
The talk will be of great interest to everyone who works on distributable products.
The ubiquitous quick start guides, crash courses, and "How to get started with Kubernetes" tutorials make it relatively easy to jump on this bandwagon, deploy a cluster, and run an application in it. Given the incredible popularity of the topic, many people do exactly that. But one should not forget that Kubernetes is, at its core, a rather complicated system whose maintenance requires specific knowledge. At Ingram Micro Cloud, that knowledge became necessary when yet another application suddenly became unreachable over the network. The investigation of that incident was the start of Alexander Hayorov's fascinating journey through the network subsystem.
The talk walks us through the progressively more complex elements of the Kubernetes network stack, explaining how large and intricate routing is assembled from elementary building blocks. It is especially interesting when Alexander explains why things were done one way and not another, modeling hypothetical alternative implementations.
This is really the Kubernetes ABC that most users will run into sooner or later. I myself have asked the questions "why NodePort?" and "why don't I see my service's IP on any interface?".
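Those two questions are easy to poke at from the API. Below is a hypothetical sketch, not taken from the talk: it reads a Service and prints the fields behind both questions. The Service name, the namespace, and the use of the official `kubernetes` Python client are my own assumptions for illustration.

```python
# Sketch: inspect a Service to see the ClusterIP and NodePort pieces.
# Assumes the official `kubernetes` Python client and a Service named
# "my-service" in the "default" namespace (illustrative assumptions).
from kubernetes import client, config


def describe_service(name: str = "my-service", namespace: str = "default") -> None:
    config.load_kube_config()
    svc = client.CoreV1Api().read_namespaced_service(name, namespace)

    # The ClusterIP is a virtual IP: kube-proxy implements it with iptables/IPVS
    # rules on every node, so you will not find it bound to any network interface.
    print(f"type:      {svc.spec.type}")
    print(f"clusterIP: {svc.spec.cluster_ip}")

    for port in svc.spec.ports:
        # For Services of type NodePort, each port also gets a high port
        # (30000-32767 by default) opened on every node, which is what lets
        # traffic from outside the cluster reach the virtual ClusterIP.
        print(f"port {port.port} -> targetPort {port.target_port}, "
              f"nodePort {port.node_port}")


if __name__ == "__main__":
    describe_service()
```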
Interesting and informative.
Source: https://habr.com/ru/post/420471/