Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Over the past six months, federation of Kubernetes clusters has moved from proof of concept to a release that is worth checking out. Federation was first introduced under the code name Ubernetes. Then, in Kubernetes v1.3.0, cluster federation appeared, and there is now extensive documentation on the topic.

Why is it such a big deal? If you have followed the development of Kubernetes, you probably know that it is an open source rewrite of Borg, the system Google uses internally to manage its containerized workloads across data centers. If you read the Borg paper, you will notice that a single Kubernetes cluster is the equivalent of a Borg cell; as such, Kubernetes by itself is not the complete equivalent of Borg. By adding cluster federation, however, Kubernetes can now distribute workloads across multiple clusters. This opens the door to more of the real Borg features: failover across zones, geographic load balancing, workload migration, and so on.

Indeed, cluster federation in Kubernetes is a hybrid cloud solution.

How Does It Work?

The picture below, taken from the Tectonic blog post on federation, shows a high-level architectural view.

[Figure: high-level federation architecture (federation-api-4x.png)]

Image courtesy of CoreOS.

You see three Kubernetes clusters (in San Francisco, New York, and Berlin). Each of them runs an API server, a controller manager, its own scheduler, and an etcd-based key-value store. This is the standard Kubernetes cluster setup. You can use any of these clusters from your local machine, assuming you have an account set up on them and the associated credentials. With the Kubernetes client, kubectl, you can create multiple contexts and switch between them. For example, to list the nodes in each cluster, you would do something like:

```
$ kubectl config use-context sanfrancisco
$ kubectl get nodes
$ kubectl config use-context newyork
$ kubectl get nodes
$ kubectl config use-context berlin
$ kubectl get nodes
```

With federation, Kubernetes adds a separate API server (the Federation API server), its own etcd-based key-value store, and a control plane. In effect, this is the same setup as a regular cluster, but at a higher level of abstraction: instead of registering individual nodes with the Federation API server, we register entire clusters.

A cluster is defined as a Federation API server resource, in a format consistent with the rest of the Kubernetes API specification. For example:

```
apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: new-york
spec:
  serverAddressByClientCIDRs:
    - clientCIDR: "0.0.0.0/0"
      serverAddress: "${NEWYORK_SERVER_ADDRESS}"
  secretRef:
    name: new-york
```
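
The secret referenced by secretRef is expected to hold a kubeconfig for that cluster, so the federation control plane can talk to it. Here is a minimal sketch of creating it, assuming the control plane runs in a namespace called federation on the San Francisco host cluster (the namespace name and the kubeconfig path are placeholders):

```
# Placeholder namespace and path; adjust to where your federation
# control plane actually runs and where your kubeconfig lives.
$ kubectl --context=sanfrancisco --namespace=federation \
    create secret generic new-york \
    --from-file=kubeconfig=/path/to/newyork.kubeconfig
```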

Adding a cluster to the federation is a simple creation step on the Federation API server:

```
$ kubectl --context=federated-cluster create -f newyork.yaml
```

In the command above, notice that I use a context called federated-cluster and that I use the standard Kubernetes client. Indeed, the Federation API server extends the Kubernetes API and can be talked to with kubectl.

Also note that the federation components can (and actually should) run within a Kubernetes cluster.
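
For example, if you host them in the San Francisco cluster in a dedicated namespace, checking on them is no different from checking on any other workload (the namespace name here is just an assumption):

```
$ kubectl --context=sanfrancisco get pods --namespace=federation
```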

Creating Your Own Federation

I will not show you all the steps here, as it would honestly make for a long blog post. The official documentation is fine, but the best way to understand the entire setup is the walkthrough from Kelsey Hightower. This walkthrough uses Google Container Engine (GKE) and Google Cloud DNS, but it can be adapted relatively easily to your own setup using your own on-premises clusters.

In short, the steps to create a Federation are (a rough sketch of the corresponding commands follows the list):

1. Pick the cluster where you will run the federation components, and create a namespace where you will run them.

2. Create a Federation API server Service that you can reach (e.g., LoadBalancer, NodePort, or Ingress), create a secret containing the credentials for the account you will use on the federation, and launch the API server as a Deployment.

3. Create a local context for the Federation API server so that you can use kubectl to target it. Generate a kubeconfig file for it, store it as a secret, and launch the control plane. The control plane will authenticate with the API server using that kubeconfig secret.

4. Once the control plane is running, you are ready to add the clusters. You will need to create a secret for each cluster's kubeconfig. Then, with each Cluster resource manifest in hand (see above), you can use kubectl to create them in the federation context.
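
As promised, here is a rough sketch of what steps 1 through 3 can look like on the host cluster. Every concrete name (namespace, secret, address, token) is a placeholder, and the actual API server and control plane manifests come from the documentation or the walkthrough:

```
# 1. A namespace on the host cluster for the federation components.
$ kubectl --context=sanfrancisco create namespace federation

# 2. A secret with the credentials the Federation API server will accept
#    (a tokens file is just one possible layout).
$ kubectl --context=sanfrancisco --namespace=federation \
    create secret generic federation-apiserver-secrets \
    --from-file=known-tokens.csv

# 3. A local context pointing at the Federation API server service, so
#    kubectl can target the federation directly.
$ kubectl config set-cluster federated-cluster --server=https://${FEDERATION_IP}
$ kubectl config set-credentials federated-admin --token=${FEDERATION_TOKEN}
$ kubectl config set-context federated-cluster \
    --cluster=federated-cluster --user=federated-admin
```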

The end result is that you should have a working federation, with your clusters registered and ready. The following command will show them:

```
$ kubectl --context=federated-cluster get clusters
```

Migrating a Workload From One Cluster to Another

As federation matures, we can expect to see most Kubernetes resources available through the Federation API. Currently, only Events, Ingress, Namespaces, Secrets, Services, and ReplicaSets are supported. Deployments should not be far off, because ReplicaSets are already there, and they will be great because they will bring us rolling updates and rollbacks across clusters.
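
For instance, since Namespaces are federated, a namespace created through the Federation API server should be propagated by the control plane to every registered cluster (the namespace name below is arbitrary):

```
$ kubectl --context=federated-cluster create namespace web
$ kubectl --context=newyork get namespaces
$ kubectl --context=berlin get namespaces
```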

Creating a workload in the federation is exactly the same as on a single cluster: write a resource manifest for a replica set and create it with kubectl targeting the federated cluster.

```
$ cat nginx.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.10

$ kubectl --context=federated-cluster create -f nginx.yaml
```
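
To see where the federated replica set actually placed the Pods, query each underlying cluster with its own context (same context names as earlier):

```
$ kubectl --context=sanfrancisco get pods -l app=nginx
$ kubectl --context=newyork get pods -l app=nginx
$ kubectl --context=berlin get pods -l app=nginx
```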

The really great thing, though, even at this early stage of federation support, is that you can already express a preference for some clusters in the federation, so that when the Pods start, more of them may be scheduled on one cluster and fewer on another. This is done via an annotation.

Add the following to the metadata of the replica set above:

```
  annotations:
    federation.kubernetes.io/replica-set-preferences: |
        {
            "rebalance": true,
            "clusters": {
                "new-york": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 1
                },
                "berlin": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 1
                }
            }
        }
```

If you scale to 10 replicas, you will see five pods appear on each of the two clusters, since each one has the same weight in the annotation.
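
A quick sketch of that scaling step, assuming the Federation API server accepts the scale operation for replica sets (if it does not, simply set replicas: 10 in nginx.yaml and re-apply it):

```
$ kubectl --context=federated-cluster scale replicaset nginx --replicas=10
```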

Now try this: edit the annotation and change the weights. For example, put 20 on one of the clusters and 0 on the other.
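
For example, to push everything to New York, the clusters section of the annotation would look like this:

```
            "clusters": {
                "new-york": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 20
                },
                "berlin": {
                    "minReplicas": 0,
                    "maxReplicas": 10,
                    "weight": 0
                }
            }
```

Then re-apply the manifest against the federation context: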

```
$ kubectl --context=federated-cluster apply -f nginx.yaml
```

You will see all your Pods “move” over to one cluster and disappear from the other. Edit the replica set again, switch the weights the other way, and do another apply: the Pods will “move” back.

This is not a migration in the sense of copying memory between two hypervisors, as we can do with VMs, but it is a migration in the microservices sense, where we can move services from region to region.

And this is just the beginning!

Read the previous articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Helm: The Kubernetes Package Manager

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a longtime open source contributor. A member of the Apache Software Foundation and of the Kubernetes organization, he is also the author of the O’Reilly Docker Cookbook. He recently founded skippbox, which offers solutions, services, and training for Kubernetes.