
Think twice before using Helm

Helm without the hype. A sober look


Helm is a package manager for Kubernetes.


At first glance, not bad. This tool greatly simplifies the release process, but sometimes it can also cause trouble; there is no way around that!


Helm was recently officially recognized as a top-level @CloudNativeFdn project, and it is widely used by the community. That says a lot, but I would like to briefly talk about the unpleasant aspects of this package manager.


What is the true value of Helm?


I still cannot answer this question with confidence. Helm does not provide anything special. What benefits does Tiller (the server side) bring?


Many Helm charts are far from perfect, and additional effort is needed to use them in a Kubernetes cluster; for example, they lack RBAC, resource limits, and network policies. You cannot just install a Helm chart as a binary artifact, without thinking about how it will actually work.


It is not enough to extol Helm with the simplest examples. Explain why it is so good, especially from the point of view of a secure multi-tenant production environment.


Talk is cheap. Show me the code.
—Linus Torvalds

An additional layer of authorization and access control


I remember someone comparing Tiller to a "huge sudo server". To me, it is just another layer of authorization that requires additional TLS certificates yet provides no access control. Why not use the Kubernetes API and the existing security model, with its auditing and RBAC support?
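
For scale, here is roughly what securing the Helm-to-Tiller channel involves in Helm 2 (a sketch based on Helm's TLS documentation; the certificate file names are placeholders):

 $ # install Tiller with TLS enabled and client certificate verification
 $ helm init --tiller-tls --tiller-tls-verify \
     --tiller-tls-cert tiller.cert.pem --tiller-tls-key tiller.key.pem \
     --tls-ca-cert ca.cert.pem
 $ # every client call then needs its own certificates as well
 $ helm ls --tls --tls-ca-cert ca.cert.pem \
     --tls-cert helm.cert.pem --tls-key helm.key.pem

All of this sits on top of, not instead of, the RBAC rules that Tiller's own ServiceAccount needs.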


A glorified templating tool?


In essence, Helm renders Go template files using the configuration from the values.yaml file, and the resulting Kubernetes manifests, together with the corresponding metadata, are stored in a ConfigMap.


You could achieve the same with a few simple commands:


 $ # render the Go template files using a Golang or Python script
 $ kubectl apply --dry-run -f .
 $ kubectl apply -f .

I noticed that teams usually keep one values.yaml file per environment, or even generate it from a values.yaml.tmpl before use.
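
A minimal sketch of what that per-environment workflow can look like (the directory layout and chart name here are hypothetical):

 $ ls ./envs
 production.yaml  staging.yaml
 $ # render the chart client-side with the staging values and dry-run it
 $ helm template ./mychart -f ./envs/staging.yaml | kubectl apply --dry-run -f -
 $ helm template ./mychart -f ./envs/production.yaml | kubectl apply -f -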


This works poorly with Kubernetes secrets, which are often encrypted and kept in several versions in the repository. To get around this limitation, you can use the helm-secrets plugin or the --set key=value flag; either way, yet another level of complexity is added.
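
Both workarounds look roughly like this (a sketch; the release, chart, and value names are made up, and helm-secrets assumes values encrypted with sops):

 $ # option 1: inject the secret on the command line
 $ helm upgrade foo ./mychart --set dbPassword="s3cr3t"
 $ # option 2: the helm-secrets plugin decrypts secrets.yaml transparently
 $ helm secrets upgrade foo ./mychart -f secrets.yaml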


Helm as an infrastructure lifecycle management tool


Forget it. It will not work, especially for the core components of Kubernetes, such as kube-dns, the CNI provider, the cluster autoscaler, and so on. These components have different life cycles, and Helm does not fit them.


My experience with Helm shows that this tool works well for simple deployments built on basic Kubernetes resources, ones that can easily be recreated from scratch and do not involve a complex release process.


Unfortunately, Helm cannot cope with more advanced and frequent deployments, including those involving Namespace, RBAC, NetworkPolicy, ResourceQuota, and PodSecurityPolicy resources.


I understand that Helm fans may not like my words, but such is the reality.


Helm state


The Tiller server stores release information in ConfigMaps inside Kubernetes; it does not need its own database.

Unfortunately, the size of a single ConfigMap cannot exceed 1 MB due to etcd limitations.
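
You can inspect this state directly (the label and naming convention below follow Helm 2's defaults, with a hypothetical release called foo):

 $ # Tiller keeps one ConfigMap per release revision in kube-system
 $ kubectl -n kube-system get configmaps -l OWNER=TILLER
 $ kubectl -n kube-system get configmap foo.v1 -o yaml | head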


I hope someone will come up with a way to improve the ConfigMap storage driver so that it compresses the serialized release before persisting it. However, I do not think this would solve the real problem anyway.


Random failures and error handling


For me, the biggest problem with Helm is its unreliability.


Error: UPGRADE FAILED: "foo" has no deployed releases


This, IMHO, is one of the most annoying problems of Helm.


If the first release fails to deploy, every subsequent attempt will fail with an error saying that Helm cannot upgrade from an unknown state.
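
The failure mode looks roughly like this (a sketch; the release and chart names are made up, and the output columns are abbreviated):

 $ helm ls --all foo
 NAME  REVISION  STATUS  CHART
 foo   1         FAILED  mychart-0.1.0
 $ helm upgrade foo ./mychart
 Error: UPGRADE FAILED: "foo" has no deployed releases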


There is a pull request that "fixes" this error by adding the --force flag, which in fact simply masks the problem by running helm delete && helm install --replace under the hood.


However, in most cases you will have to purge the release completely:


 helm delete --purge $RELEASE_NAME 

Error: release foo failed: timed out


If a ServiceAccount is missing, or RBAC forbids creating a specific resource, Helm will return the following error message:


 Error: release foo failed: timed out waiting for the condition 

Unfortunately, Helm does not show the root cause of this error; you have to dig it out of the cluster events yourself:


 kubectl -n foo get events --sort-by='{.lastTimestamp}' 

 Error creating: pods "foo-5467744958" is forbidden: error looking up service account foo/foo: serviceaccount "foo" not found 

Failing out of the blue


In the worst cases, Helm fails without actually doing anything at all; for example, it sometimes does not update resource limits.
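
When in doubt, it helps to compare what Helm believes it deployed with what is actually running in the cluster (a sketch; the release name is made up, and it assumes a kubectl version that has the diff subcommand):

 $ # dump the manifests Helm recorded for the release and diff them
 $ # against the live objects
 $ helm get manifest foo | kubectl diff -f -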


helm init runs Tiller as a single replica, not in an HA configuration


Tiller is not highly available by default, and the pull request that would change this is still open.
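
You can check (and work around) this yourself; the Deployment name below is the Helm 2 default:

 $ kubectl -n kube-system get deployment tiller-deploy
 NAME            DESIRED   CURRENT   AVAILABLE
 tiller-deploy   1         1         1
 $ # scaling it up is possible, but running several Tiller replicas
 $ # against the same ConfigMap-based state has caveats of its own
 $ kubectl -n kube-system scale deployment tiller-deploy --replicas=2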


One day this will lead to downtime ...


Helm 3? Operators? Future?


Some promising features will appear in the next version of Helm. For more information, see the Helm 3 Project Proposals.


I really like the idea of a Tiller-less architecture, but the Lua-based scripting raises doubts, because it could make charts more complicated.


I have also noticed that operators have been gaining popularity lately, and they are a much better fit for Kubernetes than Helm charts.


I really hope that the community will sort out Helm's problems soon (with our help, of course), but for now I will try to use this tool as little as possible.


Don't get me wrong: this article is my personal opinion, formed while building a hybrid cloud platform for Kubernetes-based deployments.



Source: https://habr.com/ru/post/429340/

