Stageless Deployment Pipelines: How Containers Change the Way We Build and Test Software


Large web services have long realized the benefits of microservices for scaling both applications and development. Most development teams are now building microservices on containers, but they haven't updated their deployment pipelines for the new paradigm. They still use the classic build -> stage -> test -> deploy model. It's familiar and entrenched, and it's a bad way to release code.

First, the bad: staging servers get in the way of continuous delivery

Most development teams will recognize this everyday reality. You work in a sprint to get a number of changes completed. You open a new branch for each feature, and when it's done you open a pull request against a master, staging, or develop branch. The staging branch (or worse, your master branch), carrying every change developers have merged over the last few days or two weeks, is then deployed to a staging server.

But oh no, there is a problem. The application doesn't work, integration tests fail, there's a bug, it's not stable, or maybe you just sent the staging URL to the marketing team and they don't like the way a design was implemented. Now someone has to go into the staging or master branch that's been poisoned with this change, probably rebase, re-merge a pile of pull requests minus the offending code, and push it all back to staging. Assuming everything goes well, the team has only lost a day.

In the next retrospective the team talks about better controls and testing before things reach staging, but no one stops to ask why they're using this staging methodology at all in a world of containers and microservices. Staging servers were originally built for monolithic apps and were only meant to provide simple smoke tests: proof that code didn't just run on a developer's local machine, but would at least run on some other server somewhere. Even though they are now used for full application testing with microservices, they are not an efficient way to test changes.

Here's the end result:

  1. Batched changes happen at a slow cadence

  2. One person’s code only releases when everyone’s code is ready

  3. If a bug is found, the branch is now poisoned and you have to pull apart the merges to fix it (often requiring a crazy git cheatsheet to figure it out).

  4. Business owners don’t get to see code until it’s really too late to make changes

How should it work? Look at your production infrastructure

Your production infrastructure is probably ephemeral, built from on-demand instances in Amazon, Azure, or Google Cloud. Every developer should be able to spin up an instance on demand for their changes, send it to QA, and iterate before sending it on to release.

Instead of thinking about staging servers, we have test environments that follow the classic git branch workflow. Each test environment can bring together all the interconnected microservices for much richer testing conditions.
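Here is a minimal sketch of what that can look like, assuming Docker Compose is installed and a docker-compose.yml at the repo root describes the application's microservices; the script and its names are illustrative, not a prescribed implementation. It spins up an isolated environment per feature branch by deriving the Compose project name from the branch name:

```python
# spin_up_env.py - sketch of per-branch, on-demand test environments.
# Assumes Docker Compose is installed and a docker-compose.yml in the repo
# root describes the application's microservices; names are illustrative.
import re
import subprocess

def current_branch() -> str:
    """Return the current git branch name."""
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip()

def project_name(branch: str) -> str:
    """Derive a Compose project name from the branch so environments stay isolated."""
    return "test-" + re.sub(r"[^a-z0-9]+", "-", branch.lower()).strip("-")

def spin_up(branch: str) -> None:
    """Build images for this branch and start an isolated environment."""
    subprocess.run(
        ["docker", "compose", "-p", project_name(branch), "up", "--build", "-d"],
        check=True,
    )

def tear_down(branch: str) -> None:
    """Destroy the environment (and its volumes) once review is done."""
    subprocess.run(
        ["docker", "compose", "-p", project_name(branch), "down", "-v"],
        check=True,
    )

if __name__ == "__main__":
    spin_up(current_branch())
```

Because each branch gets its own project name, environments don't collide, and tearing one down leaves the others untouched.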

Following this model keeps the whole feedback and iteration loop within the feature branch, never moving on to merge and production until all stakeholders are happy. Better still, you can actually test each image against different versions of the other microservices.
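One way to do that, assuming the compose file references image tags through environment variables (for example image: myorg/orders:${ORDERS_TAG}, a hypothetical service name used here only for illustration), is to loop the same feature image over a small version matrix of its dependencies:

```python
# version_matrix.py - sketch of testing one feature image against several
# versions of a dependent microservice. Assumes the compose file reads image
# tags from environment variables; tags and paths below are assumptions.
import os
import subprocess

FEATURE_TAG = "feature-checkout-redesign"      # hypothetical tag for the image under test
DEPENDENCY_TAGS = ["v1.8", "v1.9", "latest"]   # versions of the orders service to test against

for orders_tag in DEPENDENCY_TAGS:
    env = dict(os.environ, CHECKOUT_TAG=FEATURE_TAG, ORDERS_TAG=orders_tag)
    project = f"matrix-{orders_tag.replace('.', '-')}"
    # Bring up the composition with this version combination...
    subprocess.run(["docker", "compose", "-p", project, "up", "-d"], check=True, env=env)
    # ...run the integration suite against it (pytest is just one option)...
    result = subprocess.run(["pytest", "tests/integration"], env=env)
    print(f"orders:{orders_tag} -> {'PASS' if result.returncode == 0 else 'FAIL'}")
    # ...and tear it down before the next combination.
    subprocess.run(["docker", "compose", "-p", project, "down", "-v"], check=True, env=env)
```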

To accomplish this, DevOps teams can build and maintain a pile of scripts, logic, and workflows themselves, or use tools that already build this into a hosted CI as part of container lifecycle management.

The advantages of a test-on-demand iteration model

Once your test structure is untethered from a stagnant staging model, dev teams can actually produce code faster. Instead of waiting on DevOps or approvals to get changes onto a staging server where stakeholders can sign off, the code goes straight into an environment where it can be shared for feedback.

It also allows a much deeper level of testing than traditional CI by bringing all the connected microservices into a composition. You can actually write tests that rely on interconnected services rather than mocks. In this paradigm, integration testing allows for a greater variety of tests, and each testing service essentially becomes its own microservice.
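As an illustration, a test like the following exercises a checkout service against a real orders service from the composition instead of a mock; the service names, ports, and endpoints are assumptions made up for this sketch, not a real API:

```python
# test_checkout_integration.py - sketch of an integration test that runs
# against real, composed microservices. Names, ports, and endpoints are
# illustrative assumptions about the composition.
import requests

CHECKOUT_URL = "http://localhost:8080"   # checkout service under test
ORDERS_URL = "http://localhost:8081"     # real orders service from the composition

def test_checkout_creates_order():
    # Drive the service under test through its public API...
    resp = requests.post(
        f"{CHECKOUT_URL}/carts/42/checkout",
        json={"payment": "test-card"},
        timeout=10,
    )
    assert resp.status_code == 200
    order_id = resp.json()["orderId"]

    # ...then verify the side effect landed in the dependent service,
    # something a mocked unit test could never prove.
    order = requests.get(f"{ORDERS_URL}/orders/{order_id}", timeout=10)
    assert order.status_code == 200
    assert order.json()["status"] == "created"
```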

Once iteration is complete, the code should be ready to go straight into master (after a rebase), eliminating the group exercise that normally takes place around staging. Testing and iteration happen at the feature level, and code can then be deployed at the feature level.

That means no more staging.

About Dan Garfield and Eran Barlev

Dan Garfield is a full-stack web developer with Codefresh, a container lifecycle management platform that provides development teams of any size with advanced pipelines and testing built specifically for containers like Docker. Check them out at https://codefresh.io

Eran is an ISTQB Certified Tester with over 20 years of experience as a software engineer working primarily in compiled languages. He is the Founder of the Canadian Software Testing Board (www.cstb.ca) and an active member of the ISTQB (www.istqb.org – International Software Testing Qualifications Board).