Translator's note: May 16 of this year marked a significant milestone in the development of Helm, the package manager for Kubernetes. On that day, the first alpha release of the project's future major version, 3.0, was presented. Its release will bring significant, long-awaited changes to Helm, on which many in the Kubernetes community have pinned high hopes. We are among them, since we actively use Helm to deploy applications: we have integrated it into werf, our CI/CD tool, and occasionally contribute upstream. This translation combines 7 posts from the official Helm blog, published to coincide with the first alpha release of Helm 3, that tell the history of the project and describe the main features of Helm 3. Their author is Matt "bacongobbler" Fisher, a Microsoft employee and one of the core maintainers of Helm.
The project now known as Helm was born on October 15, 2015. Just a year after its founding, the Helm community joined Kubernetes, hard at work on Helm 2. In June 2018, Helm joined the CNCF as an incubating project. Fast forward to the present, and the first alpha release of the new Helm 3 is on its way
(this release has already taken place in mid-May - translator's note).
In this article I will talk about how it all began, how we got to where we are today, present some of the unique features available in the first alpha release of Helm 3, and explain our plans going forward.
Summary:
- history of the creation of Helm;
- a gentle farewell to Tiller;
- chart repositories;
- release management;
- changes to chart dependencies;
- library charts;
- what's next?
History of Helm
Birth
Helm 1 began as an open source project created by Deis. We were a small startup acquired by Microsoft in the spring of 2017. Our other open source project, also named Deis, had a tool called deisctl that was used (among other things) to install and operate the Deis platform on a Fleet cluster. At the time, Fleet was one of the first container orchestration platforms.
In mid-2015, we decided to change course and moved Deis (by then renamed Deis Workflow) from Fleet to Kubernetes. One of the first tools to be redesigned was the installer, deisctl, which we had used to install and manage Deis Workflow on the Fleet cluster.
Helm 1 was modeled on well-known package managers such as Homebrew, apt, and yum. Its main task was to simplify things like packaging and installing applications in Kubernetes. Helm was officially introduced in 2015 at the KubeCon conference in San Francisco.
Our first attempt at Helm worked, but it came with serious limitations. It took a set of Kubernetes manifests, sprinkled with generators in the form of front-matter YAML blocks*, and loaded the results into Kubernetes.
* Translator's note: Starting with the first version of Helm, YAML was chosen as the syntax for describing Kubernetes resources, and Jinja templates and Python scripts were supported when writing configurations. We wrote about this, and about the design of the first version of Helm in general, in the chapter "A Short History of Helm" of this material.
For example, to replace a field in a YAML file, you would add the following construction to the manifest:
#helm:generate sed -i -es|ubuntu-debootstrap|fluffy-bunny| my/pod.yaml
It's great that there are template engines today, isn't it?
For many reasons, this early Kubernetes installer required a hard-coded list of manifest files and performed only a small, fixed sequence of events. It was so hard to use that the Deis Workflow R&D team struggled when they tried to move their product onto the platform - but the seeds of the idea had been sown. Our first attempt was a great learning opportunity: we realized that we were truly passionate about building pragmatic tools that solve everyday problems for our users.
Based on the experience of past mistakes, we began to develop Helm 2.
Making Helm 2
At the end of 2015, the Google team contacted us. They were working on a similar tool for Kubernetes: their Deployment Manager for Kubernetes was a port of an existing tool used for the Google Cloud Platform. "Would we like," they asked, "to spend a few days discussing similarities and differences?"
In January 2016, the Helm and Deployment Manager teams met in Seattle to exchange ideas. The talks ended with an ambitious plan: to merge the two projects and create Helm 2. Alongside Deis and Google, the folks from Skippbox (now part of Bitnami - translator's note) joined the development team, and work on Helm 2 began.
We wanted to keep Helm easy to use, but add the following:
- chart templates for customization;
- in-cluster management for teams;
- a first-class chart repository;
- a stable package format with support for signing;
- strong commitment to semantic versioning and maintaining backward compatibility between versions.
To achieve these goals, a second component was added to the Helm ecosystem. This in-cluster component was called Tiller, and it handled installing and managing Helm charts.
Since the release of Helm 2 in 2016, Kubernetes has gained several major innovations. Role-based access control (RBAC) arrived and eventually replaced attribute-based access control (ABAC). New resource types were introduced (Deployments were still in beta at the time). Custom Resource Definitions (originally called Third Party Resources, or TPRs) were invented. And most importantly, a body of best practices emerged.
Against this background, Helm continued to serve Kubernetes users faithfully. After three years and many new additions, it became clear that it was time to make significant changes to the code base so that Helm could keep up with the growing needs of the evolving ecosystem.
A gentle farewell to Tiller
During the development of Helm 2, we introduced Tiller as part of our integration with Google's Deployment Manager. Tiller played an important role for teams working in a shared cluster: it allowed the different specialists operating the infrastructure to interact with the same set of releases.
Since role-based access control (RBAC) became enabled by default in Kubernetes 1.6, working with Tiller in production became more difficult. Given the sheer number of possible security policies, our position was to ship a permissive configuration by default. This allowed beginners to experiment with Helm and Kubernetes without first having to dive into security settings. Unfortunately, this permissive configuration could grant a user a far wider range of permissions than they actually needed. DevOps and SRE engineers had to learn extra operational steps when installing Tiller into a multi-tenant cluster.
After learning how community members use Helm in practice, we realized that Tiller's release management system did not need an in-cluster component to maintain state or to act as a central hub of release information. Instead, we could simply fetch information from the Kubernetes API server, render the charts client-side, and store the installation record in Kubernetes.
Tiller's main job could be done without Tiller, so one of our first decisions regarding Helm 3 was to drop Tiller entirely.
With Tiller gone, Helm's security model is radically simplified. Helm 3 now supports all the modern security, identity, and authorization methods of current Kubernetes. Helm's permissions are determined by the kubeconfig file. Cluster administrators can therefore restrict user rights at whatever granularity they see fit. Releases are still stored inside the cluster, and the rest of Helm's functionality is preserved.
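For example, restricting what a Helm user may do now comes down to plain Kubernetes RBAC. Below is a minimal sketch of such a restriction; the names dev-user and team-a are hypothetical, and the resource list would need to match whatever the user's charts actually create:

# A Role confining a Helm 3 user to a single namespace.
# Helm needs no permissions beyond what the user's kubeconfig grants.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: helm-deployer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["secrets", "configmaps", "services", "deployments", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-deployer-binding
  namespace: team-a
subjects:
  - kind: User
    name: dev-user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: helm-deployer
  apiGroup: rbac.authorization.k8s.io

Note that the "secrets" permission also covers Helm's own release storage, since Helm 3 keeps release data in Secrets in the release namespace (more on that below).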
Chart Repositories
At a high level, a chart repository is a place where charts can be stored and shared. The Helm client packages charts and pushes them to a repository. Simply put, a chart repository is a rudimentary HTTP server with an index.yaml file and some packaged charts.
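To make that concrete, here is a sketch of what such an index.yaml might contain; the chart name, URL, digest, and timestamps are illustrative:

# index.yaml - essentially the entire "API" of a classic chart repository
apiVersion: v1
entries:
  mychart:                 # hypothetical chart name
    - name: mychart
      version: 0.1.0
      description: An example application
      urls:
        - https://example.com/charts/mychart-0.1.0.tgz
      digest: 0e0a2b...    # checksum of the packaged chart, truncated here
      created: 2019-05-16T00:00:00Z
generated: 2019-05-16T00:00:00Z

Everything - search, metadata, and download locations - hangs off this single file, which is precisely the source of the drawbacks listed below.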
While there is some advantage to a chart repository API that meets only the most basic storage requirements, it also has several drawbacks:
- Chart repositories fit poorly with most of the security implementations required in a production environment. Having a standard API for authentication and authorization is extremely important in production scenarios.
- Helm's chart provenance tools, used to sign and verify the integrity and origin of a chart, are an optional part of the chart publication process.
- In multi-user scenarios, the same chart can be uploaded by another user, doubling the storage needed for the same content. Smarter repositories have been built to solve this problem, but they are not part of the formal specification.
- Using a single index file for search, for storing metadata, and for fetching charts makes it difficult to build secure multi-user implementations.
The Docker Distribution project (also known as Docker Registry v2) is the successor to the Docker Registry and effectively serves as a toolset for packaging, shipping, storing, and delivering Docker images. Many large cloud services offer Distribution-based products. Thanks to this widespread attention, the Distribution project has benefited from years of refinement, security best practices, and battle-testing, making it one of the most successful unsung heroes of the open source world.
But did you know that the Distribution project was designed to distribute any form of content, not just container images?
Thanks to the efforts of the Open Container Initiative (OCI), Helm charts can be hosted on any Distribution instance. For now, this process is experimental. Login support and other features needed for full-fledged Helm 3 support are still in progress, but we are excited to learn from the discoveries the OCI and Distribution teams have made over the years, and, through their mentorship and guidance, to learn what it takes to operate a highly available service at scale.
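As a taste of where this is going, the Helm 3 line ships experimental commands for working with OCI registries behind an environment flag. A hedged sketch follows; the registry host and chart reference are illustrative, and since the feature is experimental, the exact commands may change between releases:

# Enable experimental OCI support in the Helm 3 client
export HELM_EXPERIMENTAL_OCI=1

# Authenticate, save a chart locally, and push it to a registry
helm registry login registry.example.com
helm chart save ./mychart registry.example.com/myrepo/mychart:0.1.0
helm chart push registry.example.com/myrepo/mychart:0.1.0

# On another machine, pull the chart back down
helm chart pull registry.example.com/myrepo/mychart:0.1.0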
A more detailed description of some of the upcoming changes to Helm chart repositories is available here.
Release Management
In Helm 3, the state of an application is tracked inside the cluster by a pair of objects:
- the release object, which represents an application instance;
- release version secrets, which represent the desired state of the application at a specific point in time (for example, the release of a new version).
Calling helm install creates a release object and a release version secret. Calling helm upgrade requires the release object (which it may modify) and creates a new release version secret containing the new values and the rendered manifest.
The release object contains release information, where a release is a specific installation of a named chart with its values. This object describes the top-level release metadata. The release object persists for the entire life cycle of the application and owns all of the release version secrets, as well as all objects created directly by the Helm chart.
A release version secret associates a release with a series of revisions (installs, upgrades, rollbacks, deletions).
In Helm 2, revisions were strictly sequential. A call to helm install created v1, a subsequent upgrade created v2, and so on. The release and release version secret were collapsed into a single object known as a revision. Revisions were stored in the same namespace as Tiller, which meant that each release name was "global" across namespaces; as a result, a given name could be used by only one instance.
In Helm 3, each release is associated with one or more release version secrets. The release object always describes the current release deployed in Kubernetes. Each release version secret describes just one version of that release. An upgrade, for example, creates a new release version secret and then updates the release object to point to the new version. A rollback can use a previous release version secret to return the release to an earlier state.
With Tiller gone, Helm 3 stores release data in the same namespace as the release itself. This change lets you install a chart with the same release name into different namespaces, and the data survives cluster updates and restarts, living in etcd. For example, you can install WordPress into the "foo" namespace and then into the "bar" namespace, and both releases can be named "wordpress".
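A quick sketch of what this looks like from the command line, following Helm 3's conventions; the chart reference stable/wordpress is illustrative, and the owner=helm label on the release secrets is an implementation detail that could change:

# The same release name in two namespaces - impossible in Helm 2
helm install wordpress stable/wordpress --namespace foo
helm install wordpress stable/wordpress --namespace bar

# Release data now lives alongside the release as Secrets
kubectl get secrets --namespace foo -l owner=helm

# Rolling back uses a previous release version secret
helm rollback wordpress 1 --namespace foo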
Changes to chart dependencies
Charts packaged with helm package for use with Helm 2 can be installed with Helm 3. However, the chart development workflow has been thoroughly revised, so some changes are needed in order to continue developing charts with Helm 3. In particular, the chart dependency management system has changed.
The chart dependency management system has moved from requirements.yaml and requirements.lock to Chart.yaml and Chart.lock. This means that charts relying on the helm dependency command need some tweaking in order to work with Helm 3.
Let's look at an example. We'll add a dependency to a chart in Helm 2 and see what changes when moving to Helm 3.
In Helm 2, requirements.yaml looked like this:

dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
    tags:
      - database
In Helm 3, the same dependency is declared in your Chart.yaml:

dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
    tags:
      - database
Charts are still downloaded and placed in the charts/ directory, so subcharts vendored into the charts/ directory will continue to work without changes.
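In practice, migrating an existing chart is mostly a matter of moving the dependencies: block and regenerating the lock file. A sketch, assuming the chart lives in ./mychart:

# 1. Move the dependencies: section from requirements.yaml into Chart.yaml,
#    then delete requirements.yaml (and the old requirements.lock).
# 2. Regenerate the lock file and re-fetch the dependencies:
helm dependency update ./mychart   # writes Chart.lock and populates charts/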
Introducing Library Charts
Helm 3 supports a class of chart called the library chart. A library chart is used by other charts but does not create any release artifacts of its own. Library chart templates can only declare define elements; all other content is simply ignored. This lets users reuse and share code snippets across many charts, avoiding duplication and keeping charts DRY.
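For instance, a library chart's templates might contain nothing but named templates that consuming charts pull in with include. A minimal sketch, where the library chart layout and the mylib.labels template name are hypothetical:

# templates/_labels.tpl in the library chart
{{- define "mylib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

A consuming chart can then use the snippet in any of its own manifests:

metadata:
  labels:
    {{- include "mylib.labels" . | nindent 4 }}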
Library charts are declared in the dependencies section of the Chart.yaml file, and they are installed and managed like any other chart.
dependencies:
  - name: mylib
    version: 1.x.x
    repository: quay.io
We look forward to the use cases this feature opens up for chart developers, as well as the best practices that emerge around library charts.
What's next?
Helm 3.0.0-alpha.1 is the foundation on which we will build the new version of Helm. In this article I have described some of the interesting features of Helm 3. Many of them are still at an early stage of development, and that is normal: the point of an alpha release is to test ideas, gather feedback from early adopters, and validate our assumptions.
As soon as the alpha version is released (recall that this has already happened - translator's note), we will start accepting patches for Helm 3 from the community. We need to build a solid foundation that will allow new functionality to be developed and adopted, and that lets users feel involved in the process by opening tickets and submitting fixes.
In this article I have tried to highlight some of the major improvements coming in Helm 3, but the list is by no means exhaustive. The full plan for Helm 3 includes features such as improved upgrade strategies, deeper integration with OCI registries, and the use of JSON schemas to validate chart values. We also plan to clean up the codebase and update the parts of it that have been neglected over the past three years.
If you feel that we have missed something, we will be glad to hear your thoughts!
Join the discussion in our Slack channels:
- #helm-users for questions and simple communication with the community;
- #helm-dev for discussing pull requests, code, and bugs.
You can also chat during our weekly public developer calls on Thursdays at 7:30 pm MSK. The meetings are dedicated to discussing the tasks the core developers and the community are working on, as well as the topics of discussion for the week. Anyone can join and take part. A link is available in the #helm-dev Slack channel.