How Disney Is Realizing the Multi-Cloud Promise of Kubernetes


The Walt Disney Company is famous for “making magic happen,” and its cross-cloud, enterprise-level Kubernetes implementation is no different. In a brief but information-packed lightning talk at CloudNativeCon in Seattle in November, Disney senior cloud engineer Blake White laid out a few of the struggles and solutions involved in making Kubernetes work across clouds.

“Kubernetes does a lot of the heavy lifting for you, but when you need to think about an enterprise and all of its needs, maybe you need to think a little bit outside of it,” White said. “Get your hands dirty, don’t rely on all the magic, make some of the magic happen for yourself.”

With an enterprise the size of Disney, there are a lot of development, QA, and engineering teams working on many different projects, and each has its own cloud environment. White said Kubernetes can handle that just fine, but it takes some adjustment to make everything work.

The first considerations are connectivity and data:

  • Does the project need to reach code repos, artifacts, or other services in your corporate network?

  • Is there data that needs to follow certain privacy standards?

  • How much latency is tolerable?

  • Do you need to interconnect between cloud accounts?

Both Amazon and Google offer services that connect across cloud accounts, and both can get a cluster up and running quickly. Neither works flawlessly with Kubernetes, though, so White suggests not relying on an out-of-the-box solution if your project is complicated.
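On the AWS side, the usual building block for this kind of cross-account connectivity is a VPC peering connection. Below is a minimal sketch using the aws-sdk-go (v1) EC2 client; the region, VPC IDs, and account ID are placeholders, and this illustrates the general mechanism rather than anything specific to Disney’s setup.

```go
package main

// Hedged sketch: requesting a cross-account VPC peering connection with
// aws-sdk-go v1. All IDs below are placeholders, not values from the talk.

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := ec2.New(sess)

	// Ask for a peering connection from this account's VPC to a VPC owned
	// by another account. The peer account must accept the request before
	// any traffic can flow.
	out, err := svc.CreateVpcPeeringConnection(&ec2.CreateVpcPeeringConnectionInput{
		VpcId:       aws.String("vpc-11111111"), // requester VPC (placeholder)
		PeerVpcId:   aws.String("vpc-22222222"), // accepter VPC (placeholder)
		PeerOwnerId: aws.String("123456789012"), // accepter account (placeholder)
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("peering requested:",
		aws.StringValue(out.VpcPeeringConnection.VpcPeeringConnectionId))
}
```

Even after the peer accepts, both sides still need route table entries pointing the peer’s CIDR at the connection, plus security group rules that allow the traffic — exactly the kind of glue an out-of-the-box cluster installer tends not to handle.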

There are automated ways to bring up Kubernetes clusters; White mentioned both kube-up.sh and kops as excellent options, but neither was as configurable as Disney needed, so the team built its own bespoke system.

“We ended up building things on our own, and the main reason for that was because we needed our [virtual private cloud] to be connected back to our corporate network,” White said. The trickiest part of their build was setting up the DNS, he continued.
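For teams whose requirements kops does cover, bringing a cluster up inside an existing, corporate-connected VPC can be scripted. The sketch below drives `kops create cluster` from Go; the cluster name, state store, and VPC ID are placeholders, and this is an illustration of that style of automation, not the bespoke system White described.

```go
package main

// Hedged sketch: invoking kops to build a cluster inside an existing VPC
// that already routes back to a corporate network. All names and IDs are
// placeholders, not Disney's actual configuration.

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kops", "create", "cluster",
		"--name=dev.k8s.example.com",      // placeholder cluster name
		"--state=s3://example-kops-state", // placeholder state store
		"--zones=us-west-2a,us-west-2b",
		"--vpc=vpc-11111111", // reuse the corporate-connected VPC
		"--topology=private", // keep nodes off the public internet
		"--networking=weave", // private topology needs a CNI provider
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```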

“We moved from SkyDNS to kube-dns; that helped the cluster a lot, but in AWS things just weren’t working,” White explained. “Basically, our DHCP [option] set for the VPC was skipping the Amazon internal and pointing just back to our corporate network, which was what we needed, but Kubernetes was unhappy because it couldn’t find all of the nodes. We set up a BIND server, pointed that at the AWS internal for internal stuff, and back to our corporate network for everything else… Everything started working again.”
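White’s team solved this with a BIND server doing conditional forwarding. As an illustration of the same split-horizon idea in code (not their actual configuration), here is a tiny forwarder built on the github.com/miekg/dns package: AWS-internal names go to the VPC resolver, and everything else goes back to the corporate resolver. The zone names and resolver addresses are placeholders.

```go
package main

// Hedged sketch of split-horizon DNS forwarding: AWS-internal zones resolve
// against the VPC's resolver, everything else against the corporate resolver.
// Addresses and zones are placeholders, not Disney's values.

import (
	"log"

	"github.com/miekg/dns"
)

// forward returns a handler that relays each query to the given upstream
// resolver and writes the reply back to the client.
func forward(upstream string) dns.HandlerFunc {
	return func(w dns.ResponseWriter, req *dns.Msg) {
		resp, err := dns.Exchange(req, upstream)
		if err != nil {
			// Answer SERVFAIL rather than silently dropping the query.
			m := new(dns.Msg)
			m.SetRcode(req, dns.RcodeServerFailure)
			w.WriteMsg(m)
			return
		}
		w.WriteMsg(resp)
	}
}

func main() {
	// EC2-internal hostnames go to the VPC's built-in resolver...
	dns.HandleFunc("compute.internal.", forward("10.0.0.2:53"))
	// ...while everything else goes back to the corporate resolver,
	// so nodes can still find each other inside AWS.
	dns.HandleFunc(".", forward("10.100.0.53:53"))

	log.Fatal(dns.ListenAndServe(":53", "udp", nil))
}
```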

White touched on logging at the tail end of his talk, offering tips on how to avoid the unwanted expense of shipping every log to a central repository and paying for egress. His solution: keep logs next to the cluster they came from and query only what you need, he said.

“We set up an ELK stack (Elasticsearch, Logstash and Kibana), and that works really well,” White said. “Be careful where you put your dashboards, or else you’ll be shipping much more than you thought. Set up tribe nodes above it, and you can query across multiple clouds. It’s not the only solution, but it’s a good solution.”
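The payoff of that layout is that queries, not raw logs, cross the cloud boundary. Below is a minimal sketch of what such a query might look like, sent through a tribe node’s standard `_search` endpoint; the hostname, index pattern, and field names are assumptions for illustration.

```go
package main

// Hedged sketch: a narrow, filtered search against an Elasticsearch tribe
// node, so only matching documents leave each cloud. The endpoint, index
// pattern, and field names are placeholders, not details from the talk.

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Ask for just the recent documents you need instead of shipping
	// every log line to a central store up front.
	query := []byte(`{
	  "size": 50,
	  "query": {
	    "bool": {
	      "must":   [{ "match": { "log": "OutOfMemory" } }],
	      "filter": [{ "range": { "@timestamp": { "gte": "now-1h" } } }]
	    }
	  }
	}`)

	// The tribe node federates the per-cloud clusters behind one endpoint.
	resp, err := http.Post(
		"http://tribe.logging.example.internal:9200/logstash-*/_search",
		"application/json", bytes.NewReader(query))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```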

Watch the complete presentation below.
