Helm: The Kubernetes Package Manager


Back on October 15th 2016, Helm celebrated its first birthday. It was first demonstrated ahead of the inaugural KubeCon conference in San Francisco in 2015. What is Helm? Helm aims to be the default package manager for Kubernetes.

In Kubernetes, distributed applications are made of various resources: Deployments, Services, Ingresses, Volumes, and so on (as discussed in parts one and two of this series). You can create all of those resources in your Kubernetes cluster using the kubectl client, but there is a need for a way to package them as a single entity. Packaging allows for simple sharing between users, tuning through a templating scheme, and provenance tracking, among other things. All in all, Helm tries to simplify complex application deployment on Kubernetes while making it easy to share application manifests.

Helm was created by the folks at Deis and donated to the Cloud Native Computing Foundation. Recently, Helm released version 2.0.0.

Helm is made of two components: a server called Tiller, which runs inside your Kubernetes cluster, and a client called helm, which runs on your local machine. A package is called a chart, in keeping with the maritime theme. Read the birthday retrospective from Matt Butcher to get the historical context of the naming.

With the Helm client, you can browse package repositories (containing published Charts) and deploy those Charts on your Kubernetes cluster. Helm pulls the Chart and, talking to Tiller, creates a release (an instance of a Chart). The release is made up of various resources running in the Kubernetes cluster.

Structure of a Chart

A Chart is easy to demystify: it is an archive of a set of Kubernetes resource manifests that make up a distributed application. Check the GitHub repository where the Kubernetes community curates Charts. As an example, let's take a closer look at the MariaDB chart. Its structure is as follows:

```
.
├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── pvc.yaml
│   ├── secrets.yaml
│   └── svc.yaml
└── values.yaml
```

The Chart.yaml file contains metadata about the Chart, such as its name, version, and keywords. The values.yaml file contains keys and values that are used to generate the release in your cluster. These values are substituted into the resource manifests using the Go templating syntax. And, finally, the templates directory contains the resource manifests that make up this MariaDB application.
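For illustration, the relevant portion of a values.yaml for this chart could look like the sketch below. The key names match the templates discussed in this article; the sample values and comments are made up:

```
## Illustrative sketch only; see the chart's actual values.yaml for real defaults.
mariadbRootPassword: ""   # becomes the mariadb-root-password entry in the Secret
mariadbPassword: ""       # becomes the mariadb-password entry in the Secret
## Optional my.cnf content, rendered into the ConfigMap if present:
# config: |-
#   [mysqld]
#   max_connections=100
```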

If we dig a bit deeper into the manifests, we can see how the Go templating syntax is used. For example, the database passwords are stored in a Kubernetes secret, and the database configuration is stored in a Kubernetes configMap.

We see that a set of labels is defined in the Secret metadata using the Chart name, release name, etc. The actual values of the passwords are read from the values.yaml file.

```
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  mariadb-root-password: {{ default "" .Values.mariadbRootPassword | b64enc | quote }}
  mariadb-password: {{ default "" .Values.mariadbPassword | b64enc | quote }}
```

Similarly, the configMap manifest contains metadata that is computed on the fly when Tiller expands the templates and creates the release. In addition, you can see below that the database configuration can be set in the values.yaml file and, if present, is placed inside the configMap.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  my.cnf: |-
{{- if .Values.config }}
{{ .Values.config | indent 4 }}
{{- end -}}
```

Bottom line: a Chart is an archive of a set of resource manifests that make up an application. The manifests can be templatized using the Go templating syntax. An instantiated Chart is called a release; its values are read from the values.yaml file and substituted into the template manifests.
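To see the mechanics at work outside a cluster, here is a minimal sketch in plain Go that expands the password line from the Secret manifest above, using only the standard text/template package. The default, b64enc, and quote helpers are simplified stand-ins for the Sprig functions Helm wires in; real Helm also injects the .Chart and .Release objects.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"strconv"
	"text/template"
)

// renderSecretLine expands one line of the Secret manifest with the given
// password, mimicking what Tiller does when it creates a release.
func renderSecretLine(password string) string {
	// Simplified stand-ins for the Sprig template functions.
	funcs := template.FuncMap{
		"default": func(d, s string) string {
			if s == "" {
				return d
			}
			return s
		},
		"b64enc": func(s string) string {
			return base64.StdEncoding.EncodeToString([]byte(s))
		},
		"quote": strconv.Quote,
	}

	const line = `mariadb-root-password: {{ default "" .Values.mariadbRootPassword | b64enc | quote }}`

	// Values as they would be parsed from values.yaml.
	data := map[string]map[string]string{
		"Values": {"mariadbRootPassword": password},
	}

	var buf bytes.Buffer
	tmpl := template.Must(template.New("secret").Funcs(funcs).Parse(line))
	if err := tmpl.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(renderSecretLine("s3cret"))
	// prints: mariadb-root-password: "czNjcmV0"
}
```

The password ends up base64-encoded and quoted in the rendered manifest, exactly as a Kubernetes Secret expects.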

Using Helm

As always, you can build Helm from source or grab a release from the GitHub page. I expect to see Linux packages for the stable release. OS X users will also be able to get it quickly using Homebrew.

```

$ brew cask install helm

```

With helm installed, you can deploy the server-side tiller in your cluster. Note that this will create a deployment in the kube-system namespace.

```
$ helm init
Now, Tiller (the helm server-side component) has been installed into your Kubernetes cluster.

$ kubectl get deployments --namespace=kube-system
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tiller-deploy   1         1         1            1           1m
```

The client communicates with the Tiller Pod using port forwarding; hence, you will not see any Service exposing Tiller.
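If you are curious, you can reproduce this by hand with kubectl. The sketch below assumes Tiller's default gRPC port (44134) and an `app=helm` label on the Tiller pod; the pod name is illustrative:

```
$ kubectl get pods --namespace=kube-system -l app=helm
$ kubectl port-forward <tiller-pod-name> 44134 --namespace=kube-system
```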

To deploy a Chart, you can add a repository and search for a keyword. Once Helm is officially released, the repository index format will be fixed and the default repository will be fully tested and usable.

```
$ helm repo add testing http://storage.googleapis.com/kubernetes-charts-testing
$ helm repo list
NAME       URL
stable     http://storage.googleapis.com/kubernetes-charts
local      http://localhost:8879/charts
testing    http://storage.googleapis.com/kubernetes-charts...

$ helm search redis
WARNING: Deprecated index file format. Try 'helm repo update'
NAME                        VERSION    DESCRIPTION
testing/redis-cluster       0.0.5      Highly available Redis cluster with multiple se...
testing/redis-standalone    0.0.1      Standalone Redis Master
testing/example-todo        0.0.6      Example Todo application backed by Redis
```

To deploy a Chart, just use the install command:

```
$ helm install testing/redis-standalone
Fetched testing/redis-standalone to redis-standalone-0.0.1.tgz
amber-eel
Last Deployed: Fri Oct 21 12:24:01 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ReplicationController
NAME               DESIRED   CURRENT   READY     AGE
redis-standalone   1         1         0         1s

==> v1/Service
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis     10.0.81.67   <none>        6379/TCP   0s
```

You will be able to list the release, delete it, and even upgrade it or roll it back.

```
$ helm list
NAME         REVISION    UPDATED                     STATUS      CHART
amber-eel    1           Fri Oct 21 12:24:01 2016    DEPLOYED    redis-standalone-0.0.1
```
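A lifecycle for this release might then look like the following sketch, reusing the release name from above (the commands assume the same testing repository is still configured):

```
$ helm upgrade amber-eel testing/redis-standalone
$ helm rollback amber-eel 1
$ helm delete amber-eel
```

Each upgrade bumps the REVISION shown by helm list, and rollback targets one of those revisions.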

Underneath, of course, Kubernetes will have created its regular resources. In this particular case, a replication controller, a service, and a pod were created:

```
$ kubectl get pods,rc,svc
NAME                        READY     STATUS    RESTARTS   AGE
po/redis-standalone-41eoj   1/1       Running   0          6m

NAME                  DESIRED   CURRENT   READY     AGE
rc/redis-standalone   1         1         1         6m

NAME        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
svc/redis   10.0.81.67   <none>        6379/TCP   6m
```

And that’s it for a quick walkthrough of Helm. Expect stable, curated Charts to be available once Helm is released. This will give you quick access to packaged distributed applications with simple deployment, upgrade, and rollback capability.

Read the other articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long time open source contributor. Member of the Apache Software Foundation, member of the Kubernetes organization, he is also the author of the O’Reilly Docker cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.