
Kubernetes developers answer questions from Reddit users



On April 10, an AMA (Ask Me Anything) session was held on Reddit, in which 9 Kubernetes developers from around the world answered users' questions. A total of 326 comments were collected, and we present a translation of some of them, containing answers to the questions we found most interesting.

For convenience, the questions and answers are grouped into rough categories. So:

General technical questions


Question: Are there plans to add network limits to the existing restrictions on CPU and RAM? Can we also expect autoscaling based on network resources without using custom metrics?

Answer #1: No scheduler work on network bandwidth is currently planned. Given the experience with Borg, I doubt such limits would work the way users want. A preferable approach, in my opinion, would be to add something like QoS tiers, in which high-priority traffic is favored, but such an implementation has not yet been designed, and it would need to take into account the peculiarities of the large number of network plugins that Kubernetes supports.

Clarification: “Network bandwidth” is not like other scalar resources. You can't simply say “this pod must have XX bandwidth”, because available bandwidth is a property of a particular network path, not of a single endpoint. It would therefore have to be expressed as a property of a pair (“this pod must have XX bandwidth to that other pod”), which quickly becomes impossible to describe once more than a few pods are involved and would require deep network integration to implement. TL;DR: we need to think more creatively about network bandwidth.

Answer #2: There are, in fact, bandwidth-limiting annotations (kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth) that can be applied to pods, but whether the network plugin honors them depends on the particular case (for example, the built-in kubenet plugin and the OpenShift SDN plugin do, and I'm not sure about the others).

Autoscaling without custom metrics is unlikely. We try to keep the built-in (non-custom) metrics API limited to the resources that can be expressed as limits and requests, so that it does not grow for a host of other reasons. However, we do hope to better stabilize the names of some metrics currently exported by the kubelet, in order to make scaling on network metrics via the custom metrics mechanism easier.
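
As a hedged illustration (not from the AMA itself): a minimal client-go sketch that creates a pod carrying these annotations. The pod name, image, namespace, and the “1M” values are made-up assumptions, and whether the limits are actually enforced depends on the network plugin in use.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "bandwidth-demo",
			// Traffic-shaping hints; honored only by network plugins that support them.
			Annotations: map[string]string{
				"kubernetes.io/ingress-bandwidth": "1M",
				"kubernetes.io/egress-bandwidth":  "1M",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}
```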


Question: What do you think about using completely separate clusters for dev and production environments instead of separating them with namespaces? What do you usually see in real life?

Answer #1: Kubernetes was created under the inspiration of Borg, which runs a relatively small number of clusters on a very large number of machines. I would like to keep moving in that direction. At the same time, in real life I meet both models (and various intermediate variants), and there are usually good reasons for them. The upstream kernel is not perfect at isolation (help welcome). Security teams can be meticulous. Multi-tenancy in Kubernetes is at an early stage of development. And so on... Despite all this, I think the benefits of large clusters will eventually outweigh the costs.

Answer #2: This is a great question; I get it almost every week. Unfortunately, clusters are very rarely identical - too many variables are involved. So with every additional cluster you introduce, you add new risks. Managing one Kubernetes cluster is not simple in itself, and managing several differing clusters is an even harder task.

Speaking of what I see in real installations, I think separate clusters for test/staging/dev are a fairly common pattern. Ultimately, I agree that you should not run code under development in the same place as production. Namespaces are wonderful - you would be surprised how heavily the large cloud providers use them.
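
As a rough sketch of the namespace-based approach (not from the AMA itself): a dev namespace plus a ResourceQuota keeps development workloads from starving the rest of a shared cluster. The namespace name and quota values below are arbitrary assumptions.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A dedicated namespace for development workloads.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "dev"}}
	if _, err := clientset.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Cap what the dev namespace may consume so it cannot starve production workloads.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "dev-quota", Namespace: "dev"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse("8"),
				corev1.ResourceRequestsMemory: resource.MustParse("16Gi"),
				corev1.ResourcePods:           resource.MustParse("50"),
			},
		},
	}
	if _, err := clientset.CoreV1().ResourceQuotas("dev").Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("dev namespace and quota created")
}
```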


Question: What factors determine the maximum supported number of pods per node (currently 100)?

Answer : Docker scalability, kernel settings, testing, user needs.

Addition: Application size, application complexity, and how demanding the application is on the various subsystems. I have seen production running 300-400 pods per node on mid-sized machines (16-32 cores each) with small applications.

For a long time, the container runtime was a bottleneck - now it is usually the network and iptables, and storage can also be a problem.
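
For context, the configured per-node pod limit is visible on the node objects themselves. Below is a hedged client-go sketch (not from the AMA) that prints each node's reported pod capacity; the kubeconfig location is assumed to be the default.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Capacity["pods"] reflects the kubelet's configured per-node pod limit.
		podCapacity := node.Status.Capacity[corev1.ResourcePods]
		fmt.Printf("%s: pod capacity %s\n", node.Name, podCapacity.String())
	}
}
```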


Question: Are there any plans to improve the logging infrastructure? Right now it feels like a temporary solution...

Answer: Work is underway to improve the situation and to make it easier to plug in third-party logging solutions. I can't find the relevant links right away, but we are keeping an eye on this.


Question : Which of the K8s features being developed right now excite you the most?

Answer #1: For me personally: local volumes (I consider this work as useful as I hate the need for it), identity (of all kinds), the Ingress rework, and the generalization of the API machinery. I am also mulling over how to evolve Services - they are a mess.

Answer #2: There are a lot of them! Keep an eye on the feature backlog.


Related projects


Question: Are there good GUI front-ends for container management and orchestration?

Answer #1: The Kubernetes Dashboard web UI handles Kubernetes primitives wonderfully. It has a “Create” page where you can quickly deploy new Deployments just by filling out a form.

Some distributions (for example, OpenShift and Tectonic) have their own web interfaces. Kubernetic is a desktop GUI for Kubernetes that looks similar to the Kubernetes Dashboard. There is even a mobile app, Cabin! If you are looking for something more high-level and application-oriented, there is Kubeapps, which installs and manages Helm charts.

Answer #2: Have you seen Weave Scope?


Weave Scope Interface


Question: Do you expect kubeadm to become the standard way to create/upgrade clusters (outside of hosted installations)?

Answer #1: Kubeadm is a widely recognized method for deploying Kubernetes clusters, with a large number of options available. Although the Kubernetes community supports many cluster-deployment solutions in parallel (mainly because there is no single best solution covering all needs), kubeadm is the solution we usually recommend for deploying production-ready Kubernetes clusters. Then again, there are many other solutions for deploying and managing clusters, each with its pros and cons; kubeadm is just one of them.

Answer #2: I'm betting on kubeadm. And I really do bet on it every day. We are working to get it to general availability (GA) as soon as possible, and it will be really great to finally see some stability in what is, in my opinion, the most fragmented part of Kubernetes - installation.


Question: Are there any good tutorials/utilities for looking at my existing clusters and seeing what access controls I need before turning on RBAC?

Answer: Take a look at audit2rbac - it lets you scan the audit log for unauthorized API requests and generate corresponding RBAC roles for them.

Addition: This audit2rbac presentation is a good starting point.
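
To illustrate the end result, here is a hedged sketch (not from the AMA) of the kind of narrowly scoped Role such a tool might emit for a subject that was only observed reading pods. The role name, namespace, and rules are made-up examples, printed as YAML via the Kubernetes client libraries.

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A narrowly scoped Role covering only the verbs/resources observed in the audit log.
	role := rbacv1.Role{
		TypeMeta:   metav1.TypeMeta{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "Role"},
		ObjectMeta: metav1.ObjectMeta{Name: "reporting-app", Namespace: "default"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}

	// Print the role as YAML so it can be reviewed and applied to the cluster.
	out, err := yaml.Marshal(role)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```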


Question: Which project/integration under development are you most pleased with?

Answer #1: Istio.

Answer #2: Istio.

Answer #3: Cluster API! Wow, it's finally here!


Project community


Question: Recommend some good resources for those who want to become active contributors to the Kubernetes project(s).

Answer #1: I always say that those who want to contribute should pick one binary (kubelet, controller manager, apiserver, etc.) and start reading from main(). If you can read like that for an hour without finding anything worth fixing (or at least something worth renaming for better readability), you are not looking carefully enough.

Answer #2: Starting from main() is my preferred way of learning anything, too. The Kubernetes codebase can seem huge and confusing to newcomers, so I always remind people that there is nothing wrong with adding a few fmt.Printf() calls to the code, recompiling, and running it. Also, some parts of Kubernetes are considerably harder to run than others. Sometimes you just have to get creative - we have all left crazy bits of code behind while working on different parts of the system. Bash is your friend.

Answer #3: The best place to start contributing to Kubernetes is the recently created Kubernetes Contributor Guide. It describes in detail almost every aspect of contributing to the Kubernetes project and is the number one resource for those who want to become active contributors. We also regularly hold #meet-our-contributors sessions (in Slack - translator's note), where you can ask questions about the contribution process.


Question: What is the main challenge for Kubernetes in 2018?

Answer #1: Enterprise readiness. It is a classic 80/20 problem. The remaining work is hard, “dirty”, and difficult to do well.

Follow-up question: What is the 20% that is so difficult?

Follow-up answer: Security requirements. Network integration. A long list of features people think they need. Audit integration. Policy management. Compliance with regulations and standards. Over the years, every enterprise customer has accumulated a “status quo” in its environment that must be accommodated just to be considered at all.

Addition: I think part of the enterprise-readiness problem is teaching people how to operate applications on Kubernetes. The way we work with orchestrated containers differs from the traditional practices used in enterprises today. Teaching people what they need in order to succeed is a real gap right now.

Answer #2: People. Managing the project is critical: we are at a turning point where we need to figure out how to scale the human side. That - managing all these people - is what we have to work on.


Question: What is the strangest bug in K8s that you have found or fixed?

Answer #1: Possibly the one where a one-off collection of metrics for a particular storage class could become effectively endless, because the method used to collect them overloaded the underlying storage layer, which delayed the timestamps, which in turn caused metrics to go missing in Heapster.

Answer #2: A hung kube-proxy caused ICMP “no route to host” errors, which left us thoroughly bewildered and hunting for the problem across the entire network stack - everywhere except the receiving side.


Other ecosystem projects


Question: What advice would you give to the Docker Swarm team?

Answer: Take a close look at what went well and what went badly, and learn from it. But remember that it is not only the technical issues that matter. Do not underestimate the impact of timing and luck. The team has excellent engineers who do really good work, and Swarm is of great value to many users.

Addition: Docker Swarm is a great technology, but unfortunately not all of it is open source. I agree that timing and luck matter a great deal. Docker Swarm is great, and I would like to work on a project around Kubernetes that helps users understand these paradigms in their work.


Question: Which other CNCF projects are you most excited about? Are there existing projects you would be glad to see join CNCF in the near future?

Answer #1: So many options. I think Prometheus and OpenTracing are great. Envoy also makes me happy. I worked with CNI for a long time, so it would be unfair not to mention it.

Answer #2: Envoy, Jaeger, Prometheus.

Answer #3: I am glad to see that Telepresence has applied to join the CNCF. A lot of great development tools for Kubernetes have appeared recently (such as Draft, Skaffold, and freshpod) - I expect this area to keep growing!

Answer #4: kubicorn.


List of incubating CNCF projects (as of April 17, 2018)


Question: What are the plans for OpenShift after Red Hat's acquisition of CoreOS? Will the products be merged or maintained separately?

Answer: Expect a lot of news on this soon. The goal is to take the best parts of Tectonic and combine them with the best of OpenShift, and also to make sure all of these parts can be used directly with Kubernetes, making it easier to extend Kubernetes with them and build applications on top of that base.


And finally, the most memorable answer to the question of what advice the developers would give to medium and large companies migrating to K8s: “Know what problems you are trying to solve. Don't boil the ocean.”



Source: https://habr.com/ru/post/353264/

