Control Plane Engineering Is Key for Big Kubernetes Deployments


If you’re interested in running a complex Kubernetes system across several different cloud environments, you should check out what Bob Wise and his team at Samsung SDS call “Control Plane Engineering.”

During his keynote at CloudNativeCon last year, Wise explained the concept: build a layer that sits on top of the server nodes to ensure better uptime and performance across multiple clouds, produces deployments the ClusterOps team can scale easily, and covers the requirements of long-running clusters.

“[If you believe] the notion of Kubernetes as a great way to run the same systems on multiple clouds, multiple public clouds, and multiple kinds of private clouds is really important, and if you care about that, you care about control plane engineering,” Wise said.

Wise said that by focusing on that layer, and by sharing configuration and performance information with the Kubernetes community, larger Kubernetes deployments can become easier to run and more manageable.

“One of the things we’re trying to foster, and trying to build some tooling and make some contributions around, is a way for members of the community to grab their cluster configuration, including things like the settings of the cluster, dump it, capture it, and export it for sharing, and also to take performance information from that cluster and do the same,” Wise said. “The goal here is, across a wide range of circumstances, to be able to start comparing notes across the community.”
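
That kind of export does not have to wait for official tooling. As a rough illustration of the idea, and not the tooling Wise’s team is building, a short Go program using the client-go library can pull a few shareable facts out of a cluster; the clusterDump structure and the fields it captures are assumptions chosen for this sketch.

```go
// Sketch: dump a few basic cluster facts as JSON that could be shared with the
// community. Assumes a kubeconfig at the default path; clusterDump is a
// hypothetical structure, not part of any published tool.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

type clusterDump struct {
	ServerVersion string `json:"serverVersion"`
	NodeCount     int    `json:"nodeCount"`
	PodCount      int    `json:"podCount"`
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Everything below goes through the API Server: version, nodes, pods.
	version, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	dump := clusterDump{
		ServerVersion: version.GitVersion,
		NodeCount:     len(nodes.Items),
		PodCount:      len(pods.Items),
	}
	out, _ := json.MarshalIndent(dump, "", "  ")
	fmt.Println(string(out))
}
```

Extending the same pattern to richer configuration and performance data is mostly a matter of deciding which fields are safe and useful to share.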

For the work Wise and his team have done, the Control Plane involves four separate parts that sit atop the nodes and keep the cluster running optimally despite occasional machine failures and broken nodes.

The Control Plane includes:

  • An API Server on the front end through which all the components interact,

  • A Scheduler to assign pods to nodes,

  • etcd, a distributed key-value store where cluster state is maintained, and

  • A Controller Manager, which is the home for embedded control loops like replica sets, deployments, jobs, etc.
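
The Controller Manager is easiest to understand as a set of reconcile loops: each loop compares the desired state recorded in an object’s spec with the observed state reported through the API Server, then acts on the difference. A minimal sketch of that pattern, with illustrative names rather than real Kubernetes APIs, looks like this:

```go
// Minimal sketch of the reconcile pattern behind the Controller Manager's
// control loops (replica sets, deployments, jobs). Names are illustrative.
package main

import "fmt"

type replicaState struct {
	Desired  int // what the user asked for in the object's spec
	Observed int // what is actually running, as reported via the API Server
}

// reconcile returns how many pods to create (positive) or delete (negative).
func reconcile(s replicaState) int {
	return s.Desired - s.Observed
}

func main() {
	// One iteration of the loop: a replica set wants 5 pods, 3 are running.
	delta := reconcile(replicaState{Desired: 5, Observed: 3})
	switch {
	case delta > 0:
		fmt.Printf("create %d pod(s)\n", delta)
	case delta < 0:
		fmt.Printf("delete %d pod(s)\n", -delta)
	default:
		fmt.Println("in sync")
	}
}
```

Real controllers run this loop continuously against the API Server, which is how the cluster recovers from broken nodes: the loop notices the gap between desired and observed state and closes it again.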

The best way to run the system with some level of automation, Wise said, is Kubernetes self-hosting, though that requires some “tricky bootstrapping” to build. If you’re running a large cluster, however, it’s worth it in the end.

“The idea here is it’s a system entirely running as Kubernetes objects,” he said.  “You have this common operation set. It’s going to make scaling … and HA easier.”
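
In a self-hosted cluster, that means the control plane itself can be inspected with the same tools used for any other workload. As a sketch of what that looks like in practice (the kube-system namespace and the tier=control-plane label are assumptions that vary by installer), a client-go program can list the control plane pods directly:

```go
// Sketch: when the control plane is self-hosted, its components show up as
// ordinary Kubernetes objects. This lists pods in kube-system carrying a
// "tier=control-plane" label; namespace and label vary by installer.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// Name, the node it landed on, and its current phase.
		fmt.Printf("%s\t%s\t%s\n", p.Name, p.Spec.NodeName, p.Status.Phase)
	}
}
```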

One piece that is perhaps better not to build on your own is the load balancer for the API Server, which can get bogged down because it’s a bottleneck into the system. Wise said using a cloud provider’s load balancer is the easiest and, in the end, probably the best solution.

“This load balancer, this is a very key part of the overall system performance and availability,” Wise said. “The public cloud providers have put enormous investment into really great solutions here. Use them and be happy.

“It’s worth the configuration drift that happens between multiple deployments,” Wise continued. “I’d also say, again, if you’re on premises and you’re trying to do deployments and you already have these load balancers, then they work well; they’re pretty simple to configure, usually. The configurations that Kubernetes requires for support are not especially complicated. If you have them, use them, be happy, but I wouldn’t recommend going and buying those appliances new.”
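
Whichever load balancer fronts the API Server, it needs a health check against each backend. A minimal sketch of such a probe in Go, hitting the API server’s /healthz endpoint on a few placeholder backend addresses (the addresses, the port, and the decision to skip TLS verification are assumptions made for brevity), might look like this:

```go
// Sketch: probe each API server backend behind the load balancer. The backend
// addresses are placeholders, and certificate verification is skipped only to
// keep the example short; don't do that in production.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	backends := []string{ // placeholder API server addresses
		"https://10.0.0.10:6443",
		"https://10.0.0.11:6443",
		"https://10.0.0.12:6443",
	}

	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for _, b := range backends {
		resp, err := client.Get(b + "/healthz")
		if err != nil {
			fmt.Printf("%s\tunreachable: %v\n", b, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s\t%s\n", b, resp.Status)
	}
}
```

A managed cloud load balancer handles this kind of probing, plus failover between backends, out of the box.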

Watch the complete presentation below:
