Building your IT infrastructure is often a complicated dance of negotiating conflicting needs. Engineers have ideas of cool things they want to build. Operations wants stability, security, easy maintenance, and scalability. Users want things to work with no backtalk. In his talk at MesosCon Asia 2016, Frans van Rooyen of Adobe shares his team’s experiences with adapting Apache Mesos and DC/OS to support deploying infrastructure to multiple, diverse clouds.
Adobe had several problems to solve: they wanted to make it easier for engineers to write and deploy new code without having to be cloud experts, and they wanted to resolve conflicts between operations and engineering. Engineering wants AWS because “It’s agile, I don’t have to create tickets, and I have infrastructure as code. So I can actually call the infrastructure programmatically and build it; that’s what I like, and I want to do it that way.” Operations wants the local data center instead of a public cloud because “It’s secure, it’s cheaper, and we have more control.”
The various public clouds, such as Azure and AWS, have their own ways of doing things, and everyone has their own experience and preferences. Adobe’s solution was to abstract away the details of deploying to specific clouds so engineers could write simple spec files and let the new abstraction layer handle the details of deployment. Rooyen says, “Then suddenly all you care about is where do I need to run my stuff to run most effectively. The two main things that come up are latency and data governance. So if you’re thinking about where do I need to run my stuff, where do I need to run my container as an engineer, in my spec file I can say, ‘It’s very latency sensitive. It needs to be in a location over in Europe.’ Because of that, Operations can now take that requirement and run it in the appropriate cloud… Operations and Engineering don’t battle. Engineering doesn’t care, because they know their container is going to run where it needs to run.”
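The talk does not show the spec file format itself, but a minimal sketch of the idea might look like the following Python snippet. The field names, clouds, regions, and placement table here are hypothetical, chosen only to illustrate how an engineer’s latency and data-governance requirements could drive Operations’ choice of where to run a container:

```python
# Hypothetical spec an engineer might submit; the real format and fields
# used at Adobe are not shown in the talk.
spec = {
    "name": "image-renderer",
    "latency_sensitive": True,   # engineer's requirement
    "data_governance": "EU",     # data must stay in Europe
}

# Illustrative placement table maintained by Operations: which cloud and
# region satisfies each (governance, latency) combination.
PLACEMENTS = {
    ("EU", True):  ("aws", "eu-west-1"),
    ("EU", False): ("on-prem", "eu-datacenter"),
    ("US", True):  ("azure", "westus"),
}

def choose_target(spec):
    """Map an engineer's requirements to a (cloud, region) chosen by Operations."""
    key = (spec["data_governance"], spec["latency_sensitive"])
    return PLACEMENTS.get(key, ("on-prem", "default"))

print(choose_target(spec))  # -> ('aws', 'eu-west-1')
```

The point of the abstraction is exactly this separation: the engineer states requirements in the spec, and the placement decision stays with Operations.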
Another goal was to standardize as much as possible. But because the various cloud APIs are not standard, it is very difficult to build a single tool that deploys to all clouds, and cloud technologies are fast-moving targets, so maintaining such a tool would require constant work. Rooyen discusses some of the cloud-specific tools they use: the Azure Container Service (ACS) Engine is open source and freely available; Terraform, by HashiCorp, is a multi-cloud provisioning tool; and Troposphere is a Python library you use to write code that, when run, generates an AWS CloudFormation template.
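As a feel for that last workflow, here is a tiny Troposphere sketch (not from the talk; the resource name, AMI ID, and instance type are placeholders): you describe AWS resources in Python and run the script to emit the CloudFormation template.

```python
from troposphere import Template
from troposphere.ec2 import Instance

# Build a CloudFormation template in Python rather than writing JSON/YAML by hand.
template = Template()

template.add_resource(Instance(
    "ExampleInstance",                 # logical resource name (placeholder)
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
))

# Running the script prints the generated CloudFormation template as JSON.
print(template.to_json())
```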
Rooyen says, “So what’s the end result? What do we get when this is all done? Once again, through that story, we had input, input went into infrastructure, infrastructure stood up a cluster, and now we have this… We were able to provision those clusters in multiple clouds in a standard way and get the same output. The end result is a cluster: an endpoint, or a platform, that we can now deploy code to.”
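The talk does not walk through that final deployment step, but on a DC/OS cluster the usual entry point for deploying a container is Marathon’s REST API. The sketch below assumes a Marathon endpoint on the freshly provisioned cluster; the URL, app id, image, and resource sizes are placeholders, not values from the talk:

```python
import json
import urllib.request

# Hypothetical Marathon app definition; all values are placeholders.
app = {
    "id": "/example-service",
    "container": {
        "type": "DOCKER",
        "docker": {"image": "nginx:stable", "network": "BRIDGE"},
    },
    "instances": 2,
    "cpus": 0.5,
    "mem": 128,
}

# POST the definition to Marathon's /v2/apps endpoint on the cluster that
# the provisioning step produced (URL is a placeholder).
request = urllib.request.Request(
    "http://marathon.example.com:8080/v2/apps",
    data=json.dumps(app).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())
```

Whichever cloud the cluster landed in, the deployment call looks the same, which is the “same output” the quote describes.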
Watch Rooyen’s complete presentation (below) to learn more about the software used, hardware considerations, and important architectural details.
Interested in speaking at MesosCon Asia on June 21–22? Submit your proposal by March 25, 2017.