Top 5 Reasons to Love Kubernetes

At LinuxCon Europe in Berlin, I gave a talk about Kubernetes titled “Why I love Kubernetes? Top 10 reasons.” The response was great, and several folks asked me to write a blog about it. So here it is, with the first five reasons in this article and the rest to follow. As a quick introduction, Kubernetes is “an open-source system for automating deployment, scaling, and management of containerized applications,” often referred to as a container orchestrator.

Created by Google in June 2014, it currently has more than 1,000 contributors, over 37,000 commits, and more than 17,000 stars on GitHub, and it is now under the governance of the Cloud Native Computing Foundation at The Linux Foundation. A recent private survey by Gartner listed Kubernetes as the leading system for managing containers at scale.

Choosing a distributed system to perform tasks in a datacenter is quite difficult, because comparing various solutions is much more complex than looking at a spreadsheet of features or performance. Measuring the performance of systems like Kubernetes fairly is quite challenging due to the many variables we face. I believe that choosing a system also depends highly on past experiences, one’s own perspective, and the skills available in a team. Yes, this does not sound rational, but that’s what I believe. 🙂

So here, in no particular order, are the top five reasons to like Kubernetes.

#1 The Borg Heritage

Kubernetes (k8s) inherits directly from Google’s long-time secret application manager: Borg. I often characterize k8s as a rewrite in the open of Borg.

Borg was a secret for a long time, but it was finally described in the Borg paper. It is the system used by the famed Google Site Reliability Engineers (SREs) to manage Google applications like Gmail and even Google’s own cloud, GCE.

Historically, Borg managed containerized applications because, when it was created, hardware virtualization was not yet available, and also because containers offered a fine-grained compute unit with which to pack Google’s data centers and increase efficiency.

As a long-time cloud guy, what I found fascinating is that GCE runs on Borg. This means that the virtual machines we get from GCE actually run in containers. Let that sink in. It also means that GCE itself is a distributed application managed by Borg.

Hence, the killer reason for me to embrace Kubernetes was that Google has been rewriting, in the open, the solution that manages its own cloud. I often characterize this as “Imagine AWS open sourcing EC2,” which would have saved us all a bunch of headaches.

So, read the Borg paper; even if you just skim it, you will gain valuable insight into the thinking that went into Kubernetes.

#2 Easy to Deploy

This one is definitely going to be contentious, but when I jumped into Kubernetes in early 2015, I found that it was quite straightforward to set up.

First, you can run k8s on a single node (we will get back to that), but for a non-HA setup you just need a central manager and a set of workers. The manager runs three processes (the API server, the scheduler, and the controller manager), plus the etcd key-value store, and each worker runs two processes (the kubelet, which watches over the containers, and the proxy, which exposes services).
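
Once a cluster is up, you can ask the API server for the health of those manager-side components; a quick sanity check looks like this:

```
# lists the health of the scheduler, the controller manager, and etcd
$ kubectl get componentstatuses
```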

This architecture is, at a high level, similar to that of Mesos, CloudStack, or OpenStack, for instance, as well as most non-peer-to-peer systems. Replace etcd with ZooKeeper, the manager processes with the Mesos master, and the kubelet/proxy with the Mesos workers, and you have Mesos.

When I started, I was able to quickly write an Ansible playbook that used CoreOS virtual machines and set up all the k8s components. CoreOS had the advantage of also shipping a network overlay (i.e., flannel) and Docker. The end result was that, in literally less than five minutes, I could spin up a k8s cluster. I have been updating that playbook ever since, and many others exist. So for me, spinning up k8s is one command:

```
$ ansible-playbook k8s.yml
```

Note that if you want to use Google Cloud, there is a service for Kubernetes cluster provisioning, Google Container Engine (GKE), and getting a cluster is also one command that works great:

```
$ gcloud container clusters create foobar
```

While from my perspective this is “easy,” I totally understand that this may not be the case for everyone. Everything is relative, and reusing someone else’s playbook can be a pain.

Meanwhile, Docker has done a terrific job rewriting Swarm and embedding it into the Docker engine. They made creating a Swarm cluster as simple as running two bash commands, shown below.
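
For comparison, and assuming Docker 1.12 or later (where Swarm mode is built into the engine), those two commands look like this:

```
# on the first node, initialize the cluster manager
$ docker swarm init
# on each additional node, join using the token printed by init
$ docker swarm join --token <token> <manager-ip>:2377
```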

If you like that type of setup, Kubernetes now also ships with a command called kubeadm, which lets you create a cluster from the CLI. Start a master node, have the workers join, and that is it.

```
$ kubeadm init
# init prints the exact join command to run on each worker, including a token
$ kubeadm join --token <token> <master-ip>
```

I have also made a quick-and-dirty playbook for it; check it out.

#3 Development Solution with minikube

Quite often, when you want to experiment with a system and take it for a quick ride, you do not want a full-blown distributed setup in your data center or in the cloud. You just want to test it on your local machine.

Well, you’ve got minikube for that.

Download it, install it, and you are one bash command away from having a single-node, standalone Kubernetes instance.
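
minikube ships as a single static binary. As a sketch, on a Linux amd64 machine the download looks like this (the URL follows the minikube release-bucket pattern; pick the binary matching your OS):

```
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube && sudo mv minikube /usr/local/bin/
```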

```
$ minikube start
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
```

Within a short moment, minikube will have booted everything, and you will have access to your single-node k8s instance:

```
$ kubectl get nodes
NAME       STATUS    AGE
minikube   Ready     25s
```

By default, it will use VirtualBox on your machine and start a VM, which runs a single binary (i.e., `localkube`) that gives you Kubernetes quickly. That VM will also have Docker, so you can use it as a Docker host.
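
For example, you can point your local Docker client at the daemon inside the minikube VM with the docker-env helper:

```
# set the environment variables that target the VM's Docker daemon
$ eval $(minikube docker-env)
# this now lists the containers running inside the VM
$ docker ps
```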

Minikube also allows you to test different versions of Kubernetes, as well as configure it to test different features. It also comes with the Kubernetes dashboard, which you can open quickly with:

```
$ minikube dashboard
```

#4 Clean API that is easy to learn

There was a world before REST, and it was painful: painful to learn, to program with, to use, and to debug. It was also full of evolving and competing standards. But let’s not go there. That’s why I love clean REST APIs that I can look at and test with curl. To me, the Kubernetes API has been a joy: just a set of resources (or objects) with HTTP actions, with requests and responses that I can manipulate in JSON or YAML.

As Kubernetes moves quite fast, I enjoy that the various resources are grouped into API groups and well versioned. I know what is alpha, beta, or stable, and I know where to check the specifications.

If you read reason #3, you already have minikube, right? Then the fastest way to check the API is to dive straight into it:

```
$ minikube ssh
$ curl localhost:8080
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1alpha1",
...
```

You will see all the API groups and be able to explore the resources they contain, just try:

```
$ curl localhost:8080/api/v1
$ curl localhost:8080/api/v1/nodes
```

All resources have a kind, an apiVersion, and metadata.
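
Those three fields are exactly what you supply when you create a resource. As a minimal sketch (the pod name and image here are just examples), you can post a YAML manifest straight from stdin:

```
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: nginx
EOF
```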

To learn about the schema of each resource, there is a Swagger API browser that is quite useful. I also often refer to the documentation when I am looking for a specific field in the schema. The next step in learning the API is actually to use kubectl, the command-line interface to Kubernetes, which is reason #5.
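
As a preview, kubectl can also print resource schemas and drill into specific fields with its explain verb (assuming a reasonably recent kubectl):

```
$ kubectl explain pod
$ kubectl explain pod.spec.containers
```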

#5 Great CLI

Kubernetes does not leave you out in the cold, having to learn the API from scratch and then write your own client. The command-line client is there; it is called kubectl, and it is sentence-based and extremely powerful.

You can manage your entire Kubernetes cluster, and all the resources in it, via kubectl.

Perhaps the toughest part of kubectl is figuring out how to install it or where to find it. There is room for improvement there.
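
For the record, here are two common ways to get it (the curl URL follows the release-bucket pattern from the Kubernetes docs; adjust the OS and architecture as needed):

```
# if you use Google Cloud, the gcloud SDK can install it
$ gcloud components install kubectl

# otherwise, download the binary for the latest stable release
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
$ chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```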

Let’s get going with our minikube setup again and explore a few kubectl verbs like get, describe, and run.

```
$ kubectl get nodes
$ kubectl get nodes minikube -o json
$ kubectl describe nodes minikube
$ kubectl run ghost --image=ghost
```

That last command will start the blogging platform Ghost. You will shortly see a pod appear. A pod is the lowest compute unit in Kubernetes and its most basic resource. With the run command, Kubernetes created another resource called a deployment. Deployments provide a declarative definition of a containerized service (see it as a single microservice). Scaling this microservice takes one command:

```
$ kubectl scale deployments/ghost --replicas=4
```
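
You can then watch the replicas appear; kubectl run labels the pods it creates with run=<name>, so they are easy to select:

```
$ kubectl get deployments
$ kubectl get pods -l run=ghost
```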

For every kubectl command you try, you can use two little tricks I love: --watch and --v=99. The watch flag waits for events to happen, which feels a lot like the standard Linux watch command. The verbose flag, with a value of 99, shows you the curl commands that mimic what kubectl does. It is a great way to keep learning the API and discovering the resources and requests it uses. For example:
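
```
# keep the listing open and print pods as their state changes
$ kubectl get pods --watch
# print the underlying REST calls, including equivalent curl commands
$ kubectl get pods --v=99
```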

Finally, to get your mind blown, you can just edit this deployment in place; it will trigger a rolling update.

```
$ kubectl edit deployment/ghost
```
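
If you prefer a non-interactive route, kubectl also has a set image verb; for example, moving the deployment above to a different image tag (ghost:0.11 is just an example tag) triggers the same rolling update:

```
$ kubectl set image deployment/ghost ghost=ghost:0.11
```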

Stay tuned for five more reasons to love Kubernetes.

So you’ve heard of Kubernetes but have no idea what it is or how it works? The Linux Foundation’s Kubernetes Fundamentals course will take you from zero to knowing how to deploy a containerized application and manipulate resources via the API. Sign up now!