
It feels like forever since February's Love Kubernetes meetup. Joining the Cloud Native Computing Foundation, certifying our Kubernetes distribution under the Certified Kubernetes Conformance Program, and launching our Kubernetes Cluster Autoscaler in the Mail.ru Cloud Containers service made the wait a little easier.
It's time for the third @Kubernetes Meetup! In short:
- Gazprombank will tell how they use Kubernetes in their R&D to manage OpenStack;
- Mail.ru Cloud Solutions — how to scale applications in K8S with autoscalers and how they built their own Kubernetes Cluster Autoscaler implementation;
- and the Wunderman Thompson agency — how Kubernetes helps them optimize their approach to development and why DevOps has more Dev than Ops.
The meetup will take place on June 21 (Friday) at 18:30 in the Moscow office of Mail.ru Group (Leningradsky Prospekt 39, building 79).
Registration is required and closes on June 20 at 11:59 am (or earlier if seats run out).
“Kubernetes for developers: how many Devs are in DevOps?”
Gregory Nikonov, Wunderman Thompson, Managing Director
We don't have 500-node clusters. We don't have hardcore DevOps. We don't have dedicated product teams. But we do have plenty of interesting projects, and the answers we found while building and supporting them. First and foremost we are developers, used to creating the tools we will later use ourselves. Perhaps they will help you in your work too.
Wunderman Thompson is one of the pioneers of Internet solution development in Russia; today the agency builds everything from simple landing pages to complex distributed systems. Kubernetes helps it optimize its approach to development and helps its customers with hosting and operating the solutions it delivers.
In distributed systems with many integrations and internal components, a microservice architecture is the natural answer to requirements for upgradeability and maintainability; however, moving to such an architecture brings a whole set of problems around versioning and publishing. Because we are an agency rather than a dedicated product team, and our developers do not constantly hold the detailed context of a specific solution on their machines, we have extra requirements: a reproducible development environment, the ability for several teams to make changes at the same time, and the ability to return to a project after a while. Our answer to these challenges is a set of processes and tools we developed that make it easier for our developers and DevOps engineers to build and maintain the solutions they create.
You will learn why DevOps is more Dev than Ops and how laziness reduces the time and cost of development and support, as well as:
- how Kubernetes changed our approach to project development;
- what the life cycle of our code looks like;
- what tools we use for controlled publication of microservices;
- how we solve the problem of stale build artifacts;
- how we deploy to the cluster with pleasure.
“Scaling applications with Kubernetes Cluster Autoscaler: the nuances of the Autoscaler and the implementation of Mail.ru Cloud Solutions”
Alexander Chadin, Mail.ru Cloud Solutions, developer of PaaS services
Today users take it for granted that your application is always online and always available, which means it must withstand any amount of traffic, however large. Kubernetes offers a fairly elegant way to scale automatically with the load — Kubernetes Cluster Autoscaler.
Broadly, Kubernetes has two kinds of scaling, depending on what we scale: the application itself or the cluster under it. Pod-level scaling increases the number of application replicas (or the resources given to each replica) within the existing nodes. The more involved case is cluster-level scaling, where we increase the number of nodes themselves.
In the second case we can run even more copies of the application, which keeps it highly available. We will talk about this node-level scaling with Cluster Autoscaler. It can not only increase but also reduce the number of nodes depending on the load: once a load peak passes, Autoscaler shrinks the cluster back to the required size and, with it, the bill for provider resources.
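As a minimal illustration of the pod-level side (a hypothetical manifest, not the MCS setup), replica scaling is usually declared with a HorizontalPodAutoscaler like the one below; when the extra replicas no longer fit onto existing nodes, Cluster Autoscaler — if it is enabled for the cluster — is what adds nodes:

```yaml
# Hypothetical example: scale the replicas of a "web" Deployment on CPU load.
# The Deployment name, replica bounds and target utilization are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

When the scheduler cannot place the new pods, Cluster Autoscaler notices the pending pods and provisions additional nodes; when nodes become underused, it removes them again.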
At the meetup we will go into the nuances of how Kubernetes Cluster Autoscaler works and the difficulties we faced when launching our own Cluster Autoscaler implementation as part of the Mail.ru Cloud Containers service. You will learn:
- which autoscalers exist in Kubernetes and what is special about using them;
- what to pay attention to when using autoscalers;
- how we segmented nodes by availability zones using Node Groups;
- how Kubernetes Cluster Autoscaler was implemented in MCS.
“R&D in Gazprombank: how K8S helps manage OpenStack”
Maxim Kletskin, Gazprombank, Product Manager
In a world where the trend is everything-as-a-service, time to market matters above all. You need to develop applications quickly to test hypotheses and to catch new markets at the moment they emerge. Speed is especially important for banks, and new technologies help here — in particular containerization and Kubernetes.
Maxim Kletskin is a product manager at Gazprombank, where he is building a sandbox for launching pilot products. Gazprombank's R&D team runs a wide range of experiments in its own cloud, which is built on OpenStack. Kubernetes is used there in two roles: 1) Kubernetes on bare metal as the management layer for the OpenStack cloud, and 2) K8S in the form of the OpenShift distribution, used for development.
The talk covers the first case: how Gazprombank uses Kubernetes to manage OpenStack. If you look at the OpenStack architecture, it is quite modular — a set of fairly independent services — so using Kubernetes as the OpenStack control layer looks both interesting and logical. It also makes it easier to add nodes to the OpenStack cluster and raises the reliability of the control plane. And, as the cherry on top, it simplifies collecting telemetry from the cluster.
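To make the idea concrete, here is a purely hypothetical sketch (not Gazprombank's actual setup; projects such as openstack-helm take a similar approach) of an OpenStack control-plane service packaged as a container and run as an ordinary Kubernetes Deployment:

```yaml
# Hypothetical sketch: the Keystone API as a Kubernetes Deployment.
# The image, namespace and config volume names are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api
  namespace: openstack
spec:
  replicas: 3                      # several replicas raise control-plane reliability
  selector:
    matchLabels:
      app: keystone-api
  template:
    metadata:
      labels:
        app: keystone-api
    spec:
      containers:
        - name: keystone-api
          image: registry.example.com/openstack/keystone:stein   # illustrative image
          ports:
            - containerPort: 5000                                # Keystone API port
          volumeMounts:
            - name: keystone-etc
              mountPath: /etc/keystone
      volumes:
        - name: keystone-etc
          configMap:
            name: keystone-etc                                   # rendered keystone.conf
```

With this approach, adding an OpenStack control node becomes adding a Kubernetes node, and rolling out a new service version becomes a normal Deployment rollout.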
You will learn:
- why a bank needs R&D: to test and experiment;
- how we containerize OpenStack;
- how and why to deploy OpenStack to K8S.
After the talks we will smoothly switch into @Ku-beer-netes After-Party mode, and we have prepared some cool announcements for you. Be sure to register via the link; we review all applications within a couple of days.
We announce new @Kubernetes Meetup events and other Mail.ru Cloud Solutions events first in our Telegram channel: t.me/k8s_mail. Want to speak at the next @Kubernetes Meetup? You can submit an application here: mcs.mail.ru/speak