Put Wind into your Deployments with Kubernetes and Helm


I’m a Software Engineer. Every day, I come into work and write code. That’s what I’m paid to do. As I write my code, I need to be confident that it’s of the highest quality. I can test it locally, but anyone who’s ever heard the words, “…but it works on my machine,” knows that’s not enough. There are huge differences between my local environment and my company’s production systems, both in terms of scale and integration with other components. Back in the day, production systems were complex, and setting them up required a deep knowledge of the underlying systems and infrastructure. To get a production-like environment to test my code, I would have to open a ticket with my IT department and wait for them to get to it and provision a new server (whether physical or virtual). This was a process that took a few days at best. That used to be OK when release cycles were several months apart. Today, it’s completely unacceptable.

Instant Environments Have Arrived

We all know this; it’s almost a cliché. Customers today will not wait months, weeks, or even days for urgent fixes and new features. They expect them almost instantly. Competition is fierce, and if you snooze, you lose. You must release fast or die! This is the reality of the software industry today. Everything is software, and software needs to be continuously tested and updated.

To keep up with ever-faster release cycles and deliver bug fixes, new features and security updates at near-real-time speed, developers need tooling that supports quick and accurate verification of their work. This need is met by virtualization and container technologies that put on-demand development environments at developers’ fingertips. Today, a developer can easily spin up a production-like Linux box on their own computer and run and test their application almost effortlessly.
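For example (the image name and port below are placeholders, not something defined in this article), running your service in an isolated, production-like container on your own machine can be as simple as:

    docker build -t my-app:dev .              # build an image from your local Dockerfile
    docker run --rm -p 8080:8080 my-app:dev   # run the container and expose the app on port 8080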

The K8s Solution for O11n

Over the past few years, the evolution of orchestration (o11n) tools has made it incredibly easy to deploy containerized applications to remote production-like environments while seamlessly taking care of developer overhead such as security, networking, isolation, scaling and healing.

Kubernetes is one of the most popular tools and has quickly become the leading orchestration platform for containerized applications. As an open-source tool, it has one of the biggest developer communities in the world. With many companies using Kubernetes in production, it has proven mileage and continues to lead the container orchestration pack.

Much of Kubernetes’ popularity comes from the ease with which you can spin up a cluster, deploy your applications to it and scale it to your needs. It’s really DIY-friendly, and you won’t need any system or IT engineers to support your development efforts.
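For example, assuming you have a local cluster tool such as Minikube installed (one common option; your setup may differ), a single-node development cluster is just a couple of commands away:

    minikube start      # create a local single-node Kubernetes cluster
    kubectl get nodes   # verify the cluster is up and the node is Ready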

Once your cluster is ready, anyone can deploy an application to it using a simple set of endpoints provided by the Kubernetes API.
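In practice, most developers drive those endpoints through kubectl rather than calling the REST API directly. Here is a minimal sketch, assuming a hypothetical image called my-app:1.0 that has already been pushed to a registry:

    kubectl create deployment my-app --image=registry.example.com/my-app:1.0   # create a Deployment
    kubectl expose deployment my-app --port=80 --target-port=8080              # put a Service in front of it
    kubectl get pods                                                           # watch the application come up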

In the following sections, I’ll show you how easy it can be to run and test your code on a production-like environment.

An Effective Daily Routine with Kubernetes

The illustration below suggests an effective flow that, as a developer, you could adopt as your daily routine. It assumes that you have a production-like Kubernetes cluster set up as your development or staging environment.

[Illustration: a suggested daily development flow using a production-like Kubernetes cluster]
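In command-line terms, that loop might look roughly like the following sketch (the image, chart and label names are placeholders, not taken from this article):

    docker build -t registry.example.com/my-app:dev .   # build your latest changes into an image
    docker push registry.example.com/my-app:dev         # push it where the cluster can pull it
    helm upgrade --install my-app ./my-app               # deploy or update the release on the dev cluster
    kubectl get pods -l app=my-app                       # confirm the new pods are running before you test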

Optimizing Deployment to Kubernetes with a Helm Repository

Several tools have evolved to help you integrate your development with Kubernetes, letting you easily deploy changes to your cluster. One of the most popular is Helm, the Kubernetes package manager. Helm gives you an easy way to manage the settings and configurations your applications need in Kubernetes. It also lets you specify all the pieces of your application as a single package and distribute it in an easy-to-use format.
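A Helm chart is just a small directory of files. A minimal sketch (the names are illustrative) looks like this, and packaging it produces a single versioned archive you can hand to anyone:

    my-app/
      Chart.yaml        # chart name, version and description
      values.yaml       # default, overridable configuration values
      templates/        # Kubernetes manifests with templated settings
        deployment.yaml
        service.yaml

    helm package my-app/                          # bundles the chart into my-app-<version>.tgz
    helm install ./my-app-0.1.0.tgz --name my-app # deploy it (newer Helm versions take the release name first)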

But things get really interesting when you use a repository manager that supports Helm. A Kubernetes Helm repository adds capabilities like security and access control over your Helm charts, along with a REST API that automates the use of Helm charts when deploying your application to Kubernetes. The more advanced repository managers even offer features such as high availability and massively scalable storage, making them ready for use in enterprise-grade systems.
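Once your charts live in such a repository, consuming them from any machine or CI job takes only a couple of commands (the repository URL and chart name below are placeholders):

    helm repo add my-repo https://charts.example.com   # register the Helm repository
    helm repo update                                    # refresh the local index of available charts
    helm install my-repo/my-app --name my-app           # install straight from the repository
                                                        # (newer Helm versions take the release name first)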

Other Players in the Field

Helm is not the only tool you can use to deploy an application to Kubernetes. There are other alternatives, some of which even integrate with IDEs and CI/CD tools. To help you decide which tool best meets your needs, you can read this post that compares Draft vs Gitkube vs Helm vs Ksonnet vs Metaparticle vs Skaffold. There are many other tools that help you set up and integrate with Kubernetes; you can see a flat list in this Kubernetes tools repository.

Your One Takeaway

Several container orchestration tools are available; however, the ease with which Kubernetes lets you spin up a cluster and deploy your applications to it has fueled its dominance in the market. The combination of Kubernetes and a tool like Helm puts production-like systems in the hands of every developer. With the ability to spin up a Kubernetes cluster on virtually any development machine, developers can easily implement a fully automated CI/CD pipeline and deliver bug fixes, security patches and new features with the confidence that they will run as expected when deployed to production. If there’s one takeaway you should get from this article, it’s that even if you’re already releasing fast, Kubernetes and Helm can make your development cycles shorter and more reliable, letting you release better-quality code faster.

Eldad Assis, DevOps Architect, JFrog

Eldad Assis has been working on infrastructure for years, and loving it! DevOps architect and advocate. Automation everywhere!

For similar topics on Kubernetes and Helm, consider attending KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.