
Our experience with Kubernetes in small projects (review and video report)

Dmitry Stolyarov (Flant) presenting the Kubernetes talk at RootConf, RIT++ 2017

On June 6, at the RootConf 2017 conference, held as part of the Russian Internet Technologies festival (RIT++ 2017), the talk "Our Experience with Kubernetes in Small Projects" was presented in the "Continuous Deployment and Deploy" section. It described the design, operating principles, and main features of Kubernetes, as well as our practice of using this system in small projects.

By tradition, we are pleased to present a video of the talk (about an hour, considerably more informative than the article) and the main takeaways in text form.

Background


Modern infrastructure (for web applications) has come a long way: from a backend with a DBMS on a single server, through a growing number of services split across virtual machines and servers, to cloud solutions with load balancing and horizontal scaling... and finally to microservices.

Operating a modern microservice infrastructure brings a number of difficulties caused by the architecture itself and the number of its components. Among them we highlight the following:

  1. collecting logs;
  2. collecting metrics;
  3. supervision (checking the status of services and restarting them in case of problems);
  4. service discovery (automatic discovery of services);
  5. automating configuration updates of infrastructure components (when service instances are added or removed);
  6. scaling;
  7. CI/CD (Continuous Integration and Continuous Delivery);
  8. vendor lock-in (dependence on the chosen "solution provider": a cloud provider, bare metal...).

As is easy to guess from the title of the talk, Kubernetes emerged as an answer to these needs.

Kubernetes Basics


The Kubernetes architecture as a whole consists of a master (possibly more than one) and many nodes (up to 5,000), each of which runs a kubelet, kube-proxy, and a container runtime (Docker at the time of the talk).


The master runs the API server (kube-apiserver), the scheduler (kube-scheduler), the controller manager (kube-controller-manager), and the etcd store.

In addition to all this, there is the kubectl management utility and configuration described in YAML format (a declarative DSL).



From the user's perspective, Kubernetes offers a cloud that combines all these masters and nodes and lets you run the "building blocks" of the infrastructure. These primitives include, among others, the Pod, ReplicaSet, Deployment, Service, and Ingress.


Examples of Pod and ReplicaSet descriptions in YAML format:

apiVersion: v1
kind: Pod
metadata:
  name: manual-bash
spec:
  containers:
  - name: bash
    image: ubuntu:16.04
    command: ["bash"]
    args: ["-c", "while true; do sleep 1; date; done"]

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: backend
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: fpm
        image: myregistry.local/backend:0.15.7
        command: ["php-fpm"]
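
To show how such pods are typically found and exposed inside the cluster (the service discovery item from the list above), here is a minimal sketch of a Service for the backend ReplicaSet; the port numbers are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    tier: backend        # matches the pod labels from the ReplicaSet above
  ports:
  - port: 80             # port other services use to reach the backend
    targetPort: 9000     # assumed port php-fpm listens on inside the container

Other pods in the cluster can then reach the backend simply by the DNS name backend.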

These primitives address all of the challenges listed above, with a few exceptions: automating configuration updates does not cover building Docker images, ordering new servers, or installing nodes on them, while CI/CD still requires preparatory work (installing a CI system, describing the rules for building Docker images, rolling out YAML configurations to Kubernetes).
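
For instance, the YAML configuration that a CI pipeline rolls out might be a Deployment like the minimal sketch below (the names reuse the ReplicaSet example above and the era-appropriate API group; this is an assumption for illustration, not our exact configuration):

apiVersion: extensions/v1beta1   # Deployments lived in this API group in 2017-era clusters
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: fpm
        image: myregistry.local/backend:0.15.7   # tag substituted by CI on each release
        command: ["php-fpm"]

Applying it with kubectl apply -f backend.yaml triggers a rolling update of the pods.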

Our experience: architecture and CI/CD


By small projects we mean small ones (up to 50 nodes, up to 1,500 pods) and medium ones (up to 500 nodes, up to 15,000 pods). We run the smallest projects on bare metal with three hypervisors, which looks like this:



The Ingress controller runs on three virtual machines (kube-front-X):


(Instead of the Pacemaker shown in the diagram, VRRP, ucarp, or another technology can be used; it depends on the specific data center.)
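
For reference, the resources that such a controller serves look like the following minimal sketch of an Ingress (the hostname is an assumption; the Service name refers to the backend example above):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
spec:
  rules:
  - host: app.example.com            # assumed public hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: backend       # Service in front of the backend pods
          servicePort: 80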

Here is how the Continuous Delivery chain looks:



Explanations:




For small projects, the infrastructure looks like a container cloud (its implementation is secondary and depends on the hardware and the needs) with configured storage (Ceph, AWS, GCE...) and an Ingress controller; in addition to this cloud, separate virtual machines may be available for running services that we do not put inside Kubernetes:


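As an illustration of how applications in such a cloud consume the configured storage, here is a minimal sketch of a PersistentVolumeClaim; the storage class name is an assumption and depends on the backend (Ceph RBD, AWS EBS, GCE PD, and so on):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backend-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: rbd    # assumed StorageClass backed by Ceph RBD
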

Conclusion


From our point of view, Kubernetes has matured enough to be used in projects of any size. Moreover, this system offers an excellent opportunity to make a project simple, reliable, fault-tolerant, and horizontally scalable from the very start. The main pitfall is the human factor: for a small team it is hard to find a specialist who can handle all the necessary tasks (this requires broad technological expertise), or such a specialist will be too expensive (and will soon get bored).

Video and slides


The video of the talk (about an hour) is published on YouTube.

Slides from the talk:



Continuation


Having received the first feedback on this talk, we decided to prepare a dedicated series of introductory articles on Kubernetes aimed at developers, explaining the structure of this system in more detail. We'll start in the coming weeks. Stay tuned to our blog!

PS


Also read in our blog about CI/CD and related topics:

Source: https://habr.com/ru/post/331188/

