First published on https://helm.sh/blog by Matt Fisher @bacongobbler
Helm 3 preview: Charting our future – Part 1: A history of Helm
On October 15th, 2015, the project now known as Helm was born. Only one year later, the Helm community joined the Kubernetes organization as Helm 2 was fast approaching. In June 2018, the Helm community joined the CNCF as an incubating project. Fast forward to today, and Helm 3 is nearing its first alpha release.
In this blog post, I’ll provide some history on Helm’s beginnings, illustrate how we got where we are today, showcase some of the new features available in the first alpha release of Helm 3, and explain how we move forward from here.
In order, I’ll discuss:
- The history of the creation of Helm
- A Gentle Farewell to Tiller
- Chart Repositories
- Release Management
- Changes to Chart Dependencies
- Library Charts
- What’s Next?
A History of Helm
Helm was born
Helm 1 began as an open source project created by Deis. We were a small startup company acquired by Microsoft in the spring of 2017. Our other open source project – also called Deis – had a tool called deisctl that was used for (among other things) installing and operating the Deis platform on a Fleet cluster. Fleet was one of the first “container orchestrator” platforms to exist at the time.
In mid-2015, we decided to shift gears, and the foundation of Deis (now re-named “Deis Workflow”) moved from Fleet to Kubernetes. One of the first things we had to rewrite was the installation tool, deisctl, which until then had only ever installed and managed Deis on a Fleet cluster.
Modeled after package managers like Homebrew, apt, and yum, the focus of Helm 1 was to make it easy for users to package and install their applications on Kubernetes. We officially announced Helm in 2015 at the inaugural KubeCon in San Francisco.
Our first attempt at Helm worked, but had its fair share of limitations. It took a set of Kubernetes manifests – sprinkled with generators as YAML front-matter – and loaded the generated results into Kubernetes.
For example, to substitute a field in a YAML file, one would add the following to a manifest:
#helm:generate sed -i -e s|ubuntu-debootstrap|fluffy-bunny| my/pod.yaml
Makes you really happy that template languages exist today, eh?
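For comparison, here is a rough sketch of the same substitution done with the Go templates Helm uses today (the file layout and values key are illustrative, not from the original tooling):

# templates/pod.yaml – the image now comes from values.yaml instead of a sed rewrite
image: {{ .Values.image }}

# values.yaml
image: fluffy-bunny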
For many reasons, this early Kubernetes installer required a hard-coded list of manifest files and performed only a small fixed sequence of events. It was painful enough to use that the Deis Workflow R&D team was having a tough time replatforming their product around it, but the seed of an idea was there. Our first attempt was a very successful learning opportunity: we learned that we were passionate about building pragmatic solutions that solved real day-to-day problems for our users.
Learning from our past mistakes, we started designing Helm 2.
Designing Helm 2
As 2015 wound to a close, a team from Google reached out to the Helm team. They, too, had been working on a similar tool for Kubernetes. Deployment Manager for Kubernetes was a port of an existing tool they used for Google Cloud Platform. Would we be interested, they asked, in spending a few days talking about similarities and differences?
In January 2016, the Helm and Deployment Manager teams sat down in Seattle to share some ideas. We walked out with a bold plan: merge the projects to create Helm 2. Along with Deis and Google, SkippBox joined the development team, and we started work on Helm 2.
Our goal was to maintain Helm’s ease of use, but add the following:
- Chart templates for customization
- In-cluster management for teams
- A first-class chart repository
- A stable and signable package format
- A strong commitment to semantic versioning and retaining backward compatibility version-to-version
To accomplish these goals, we added a second component to the Helm ecosystem. This in-cluster component was called Tiller, and it handled installing and managing Helm charts.
Since the release of Helm 2 in 2016, Kubernetes has added several major features. Role-Based Access Control (RBAC) arrived and eventually replaced Attribute-Based Access Control (ABAC). Many new resource types were introduced (Deployments were still in beta at the time). Custom Resource Definitions (then called Third Party Resources, or TPRs) were invented. And most importantly, a set of best practices emerged.
Throughout all of these changes, Helm continued to serve the needs of Kubernetes users. After 3 years and many new feature additions, it became a good idea to introduce some major changes to the code base so that Helm would continue to meet the needs of this evolving ecosystem.
Helm 3 preview: Charting our future – Part 2: A gentle farewell to Tiller
During the Helm 2 development cycle, we introduced Tiller as part of our integration with Google’s Deployment Manager. Tiller played an important role for teams working on a shared cluster – it made it possible for multiple different operators to interact with the same set of releases.
With role-based access controls (RBAC) enabled by default in Kubernetes 1.6, locking down Tiller for use in a production scenario became more difficult to manage. Due to the vast number of possible security policies, our stance was to provide a permissive default configuration. This allowed first-time users to start experimenting with Helm and Kubernetes without having to dive headfirst into the security controls. Unfortunately, this permissive configuration could grant a user a broad range of permissions they weren’t intended to have. DevOps and SREs had to learn additional operational steps when installing Tiller into a multi-tenant cluster.
After hearing how community members were using Helm in certain scenarios, we found that Tiller’s release management system did not need to rely upon an in-cluster operator to maintain state or act as a central hub for Helm release information. Instead, we could simply fetch information from the Kubernetes API server, render the Charts client-side, and store a record of the installation in Kubernetes.
Tiller’s primary goal could be accomplished without Tiller, so one of the first decisions we made regarding Helm 3 was to completely remove Tiller.
With Tiller gone, the security model for Helm is radically simplified. Helm 3 now supports all the security, identity, and authorization features of modern Kubernetes. Helm’s permissions are evaluated using your kubeconfig file. Cluster administrators can restrict user permissions at whatever granularity they see fit. Releases are still recorded in-cluster, and the rest of Helm’s functionality remains.
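As a quick illustration (the context, namespace, and chart names here are hypothetical), Helm 3 simply acts with whatever permissions your kubeconfig grants:

# runs with the credentials and context from your kubeconfig
helm install myapp ./mychart --namespace dev --kube-context dev-cluster

If your user cannot create resources in the dev namespace, the install should fail with the same authorization error kubectl would report.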
Helm 3 preview: Charting our future – Part 3: Chart repositories
At a high level, a Chart Repository is a location where Charts can be stored and shared. The Helm client packs and ships Helm Charts to a Chart Repository. Simply put, a Chart Repository is a basic HTTP server that houses an index.yaml file and some packaged charts.
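As a rough sketch (the chart name, URL, and checksum below are made up), an index.yaml simply lists the packaged charts the server hosts:

apiVersion: v1
entries:
  wordpress:
    - name: wordpress
      version: 5.1.0
      urls:
        - https://example.com/charts/wordpress-5.1.0.tgz
      digest: sha256:0e1c...   # checksum of the packaged .tgz
generated: "2019-05-01T00:00:00Z"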
While there are several benefits to the Chart Repository API meeting the most basic storage requirements, a few drawbacks have started to show:
- Chart Repositories have a very hard time abstracting most of the security implementations required in a production environment. Having a standard API for authentication and authorization is very important in production scenarios.
- Helm’s Chart provenance tools used for signing and verifying the integrity and origin of a chart are an optional piece of the Chart publishing process.
- In multi-tenant scenarios, the same Chart can be uploaded by another tenant, storing the same content twice and doubling the storage cost. Smarter chart repositories have been designed to handle this, but it’s not a part of the formal specification.
- Using a single index file for search, metadata information, and fetching Charts has made it difficult or clunky to design around in secure multi-tenant implementations.
Docker’s Distribution project (also known as Docker Registry v2) is the successor to the Docker Registry project, and is the de facto toolset for packing, shipping, storing, and delivering Docker images. Many major cloud vendors offer products built on the Distribution project, and with so many vendors shipping the same core, it has benefited from years of hardening, security best practices, and battle-testing, making it one of the most successful unsung heroes of the open source world.
But did you know that the Distribution project was designed to distribute any form of content, not just container images?
Thanks to the efforts of the Open Container Initiative (or OCI for short), Helm Charts can be hosted on any instance of Distribution. The work is experimental, with login support and other features considered “table stakes” for Helm 3 yet to be finished, but we’re very excited to learn from the discoveries the OCI and Distribution teams have made over the years, and from their mentorship and guidance on what it means to run a highly available service at scale.
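To give a taste of the direction, early Helm 3 builds gate this work behind an environment variable; the registry host and chart reference below are placeholders, and the commands are experimental and subject to change (login support, as noted above, is still in flight):

export HELM_EXPERIMENTAL_OCI=1
helm chart save ./mychart my-registry.example.com/myrepo/mychart:0.1.0
helm chart push my-registry.example.com/myrepo/mychart:0.1.0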
I wrote a more detailed deep-dive on some of the upcoming changes to Helm Chart Repositories if you’d like to read more on the subject.
Helm 3 preview: Charting our future – Part 4: Release management
In Helm 3, an application’s state is tracked in-cluster by a pair of objects:
- The release object: represents an instance of an application
- The release version secret: represents an application’s desired state at a particular instance of time (the release of a new version, for example)
A helm install creates a release object and a release version secret. A helm upgrade requires an existing release object (which it may modify) and creates a new release version secret that contains the new values and rendered manifest.
The release object contains information about a release, where a release is a particular installation of a named chart and values. This object describes the top-level metadata about a release. The release object persists for the duration of an application lifecycle, and is the owner of all release version secrets, as well as of all objects that are directly created by the Helm chart.
The release version secret ties a release to a series of revisions (install, upgrades, rollbacks, delete).
In Helm 2, revisions were merely incremental. helm install created v1, a subsequent upgrade created v2, and so on. The release and release version secret were collapsed into a single object known as a revision. Revisions were stored in the same namespace as Tiller, meaning that each release name was “globally” namespaced; as a result, only one instance of a name could be used.
For Helm 3, a release has one or more release version secrets associated with it. The release object always describes the current release deployed to Kubernetes. Each release version secret describes just one version of that release. An upgrade operation, for example, will create a new release version secret, and then modify the release object to point to this new version. Rollback operations can use older release version secrets to roll back a release to a previous state.
With Tiller gone, Helm 3 stores release data in the same namespace as the release’s destination. This change allows one to install a chart with the same release name in another namespace, and data is persisted between cluster upgrades/reboots in etcd. You can install WordPress into namespace “foo” as well as namespace “bar”, and both releases can be referred to as “wordpress”.
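A brief sketch of what that looks like in practice (the chart reference is illustrative, and the owner=helm label reflects current builds and may change):

helm install wordpress stable/wordpress --namespace foo
helm install wordpress stable/wordpress --namespace bar
# each release's version secrets live in the release's own namespace
kubectl get secrets --namespace foo -l owner=helm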
Helm 3 preview: Charting our future – Part 5: Changes to chart dependencies
Charts that were packaged (with helm package) for use with Helm 2 can be installed with Helm 3, but the chart development workflow received an overhaul, so some changes are necessary to continue developing charts with Helm 3. One of the components that changed was the chart dependency management system.
The Chart dependency management system moved from requirements.yaml and requirements.lock to Chart.yaml and Chart.lock, meaning that charts that relied on the helm dependency command will need some tweaking to work in Helm 3.
Let’s take a look at an example. Let’s add a dependency to a chart in Helm 2 and then look at how that changed in Helm 3.
In Helm 2, this is how a requirements.yaml looked:
dependencies:
- name: mariadb
version: 5.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
condition: mariadb.enabled
tags:
- database
In Helm 3, the same dependency is expressed in your Chart.yaml:
dependencies:
- name: mariadb
version: 5.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
condition: mariadb.enabled
tags:
- database
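In context, that block now sits alongside the rest of the chart metadata. A minimal sketch (the chart name and version are illustrative; apiVersion: v2 is how Helm 3 marks the new chart format):

apiVersion: v2
name: my-app
version: 1.0.0
dependencies:
  - name: mariadb
    version: 5.x.x
    repository: https://kubernetes-charts.storage.googleapis.com/
    condition: mariadb.enabled
    tags:
      - database

Running helm dependency update against this file writes a Chart.lock in place of the old requirements.lock.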
Charts are still downloaded and placed in the charts/ directory, so subcharts vendored into the charts/ directory will continue to work without modification.
Helm 3 preview: Charting our future – Part 6: Introducing library charts
Helm 3 supports a class of chart called a “library chart”. This is a chart that is shared by other charts, but does not create any release artifacts of its own. A library chart’s templates can only declare define elements. Globally scoped non-define content is simply ignored. This allows users to share snippets of code that can be re-used across many charts, avoiding redundancy and keeping charts DRY.
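For example, a library chart might contain nothing but named templates like the one below (the chart and template names are hypothetical), which a consuming chart then pulls in with include:

# mylib/templates/_labels.tpl
{{- define "mylib.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

# in a consuming chart's template
metadata:
  labels:
    {{- include "mylib.labels" . | nindent 4 }}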
Library charts are declared in the dependencies directive in Chart.yaml, and are installed and managed like any other chart.
dependencies:
- name: mylib
version: 1.x.x
repository: quay.io
We’re very excited to see the use cases this feature opens up for chart developers, as well as any best practices that arise from consuming library charts.
Helm 3 preview: Charting our future – Part 7: What’s next?
Helm 3.0.0-alpha.1 is the foundation upon which we’ll begin to build the next version of Helm. The features above are some of the big promises we made for Helm 3. Many of those features are still in their early stages and that is OK; the idea of an alpha release is to test out an idea, gather feedback from early adopters, and validate those assumptions.
Once the alpha has been released, we can start accepting patches from the community for Helm 3. We should have a stable foundation on which to build and accept new features, and users should feel empowered to open tickets and contribute fixes.
In this blog series, I have tried to highlight some of the big improvements coming to Helm 3, but this list is by no means exhaustive. The full plan for Helm 3 includes features such as improved upgrade strategies, deeper integrations with OCI registries, and applying JSON schemas against chart values for validation purposes. We’re also taking a moment to clean up the codebase and update parts that have languished over the last three years.
If you feel like a topic was missed, we’d love to hear your thoughts!
Feel free to join the discussion in our Slack channels:
- #helm-users for questions and just to hang out with the community
- #helm-dev for discussing PRs, code, and bugs