7 Tips to Help Your Company Succeed in the Cloud


The growing footprint of public cloud providers is changing the way we think about deploying and delivering digital services. This transformation was brought to light in a single sentence in an article about transformations and change at Etsy: “The emphasis on go-it-alone craftsmanship meant Etsy managed its own data centers, instead of using more efficient options like Amazon Web Services or Google Cloud.”

That statement reflects the state of our industry: companies and investors want more focus on developing and delivering a product, and less time and investment spent maintaining infrastructure. The needs of our products have not changed – but how we create and maintain them has. As Linux and open source professionals of all types, we are at the center of this revolution. Not only is Linux the “foundation” for most public cloud providers; studies also show a steady dominance of Linux deployments in the cloud, and the growth of container technologies such as Docker further increases the number of active Linux installs.

The Linux Foundation and Dice Open Source Jobs Report echoes the importance of open source in companies today, with 60 percent of hiring managers looking for full-time professionals with open source experience. Plus, nearly half (47 percent) of hiring managers said they’ll pay for certifications just to bring employees up to speed on open source projects.

In short, Linux professionals are uniquely positioned to help organizations succeed with public cloud deployments. Here are a few things for Linux professionals that I have learned over the course of several cloud migrations:

1. Understand systems in context and scope

We have to realize that in more modern environments, the way we deploy and administer software can vary widely. Some deployments to a public cloud can follow the traditional pattern of “standing up a server or two” and focusing on systems administration (upgrade schedules, patch management, etc.). In this case, not much changes — and not much really has to change. On the other hand, this approach can quickly introduce issues in scaling both technology and personnel. As systems grow from tens to hundreds, repeatability and manageability become primary concerns. Adding container deployments means that approaching systems administration through automation becomes essential. “Success” in the cloud largely depends on the scope and context of what we deploy.

2. Learn how to effectively use source control

Being able to repeat tasks from source has long been an important practice for infrastructure teams. The ability to expand and shrink a compute footprint in the public cloud requires that we can repeat the creation of our instances and the deployment of software quickly and automatically. Source control systems (such as git, SVN, etc.) give us the means to repeat and control how we deploy systems. Effective use of source control to keep and promote changes allows us to create easily repeatable systems.
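
As a minimal sketch of what that looks like in practice (assuming git is installed and the target directory is already a repository; the file names and contents here are hypothetical), a provisioning script can record every configuration it generates as a commit, so the exact state of a deployment can be reproduced later:

```python
# Minimal sketch: commit a generated configuration file so the deployed
# state can be reproduced from source control. Assumes git is installed
# and repo_dir is an existing git repository.
import subprocess
from pathlib import Path

def record_config_change(repo_dir: str, filename: str, content: str, message: str) -> None:
    path = Path(repo_dir) / filename
    path.write_text(content)
    # Stage and commit only the file we just wrote.
    subprocess.run(["git", "add", filename], cwd=repo_dir, check=True)
    subprocess.run(["git", "commit", "-m", message], cwd=repo_dir, check=True)

# Hypothetical usage: track the nginx config that a provisioning run generated.
record_config_change(
    repo_dir="infra-configs",
    filename="nginx.conf",
    content="worker_processes auto;\n",
    message="Provisioning: regenerate nginx.conf",
)
```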

3. Master a programming language

As the public cloud is often reliant on programming interfaces (APIs) for provisioning and management, learning how to use those APIs effectively is an important skill. Most public cloud providers publish libraries for accessing and interacting with infrastructure – typically for scripting languages such as Ruby and Python, or compiled languages such as Go, Java, and C. It may be preferable to choose a language that complements your other tooling, but know that deep mastery of one language will teach concepts that apply to other languages and runtimes. (Note that traditional shell scripting does not interact with these APIs directly, which can make things more difficult than they need to be.)
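
To make that concrete, here is a minimal sketch in Python using the AWS boto3 library (chosen for illustration; other providers publish comparable SDKs). It assumes boto3 is installed and that credentials and a default region are already configured in the environment:

```python
# Minimal sketch: talking to a public cloud API from Python with boto3.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2")

# List running instances and their private IP addresses.
response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("PrivateIpAddress"))
```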

4. Brush up on basic monitoring and troubleshooting

As the number of Linux installations grows, so does the complexity of monitoring and troubleshooting. Having a good handle on troubleshooting techniques (such as the USE Method: utilization, saturation, errors) and some basic tooling can help you diagnose performance issues. Public cloud vendors often provide some basic monitoring, such as network and CPU statistics, but detailed information about the state of the operating system is typically left to the user.
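
As a rough illustration of the kind of data worth collecting yourself, the sketch below reads CPU saturation and memory utilization directly from a running Linux system using only the Python standard library; how you interpret the numbers is up to you:

```python
# Rough sketch: basic utilization/saturation numbers on a Linux host,
# using only the standard library and /proc.
import os

def cpu_saturation() -> float:
    # 1-minute load average per CPU; values much above 1.0 suggest
    # the run queue is saturated.
    load1, _, _ = os.getloadavg()
    return load1 / os.cpu_count()

def memory_utilization() -> float:
    # Fraction of memory in use, based on MemTotal and MemAvailable.
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are in kB
    return 1.0 - meminfo["MemAvailable"] / meminfo["MemTotal"]

print(f"CPU saturation:     {cpu_saturation():.2f}")
print(f"Memory utilization: {memory_utilization():.1%}")
```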

5. Create repeatable systems

Being able to recreate the state of systems from source control or templates is key to taking advantage of the pricing models of the public cloud — paying only for what you need as the usage patterns of your systems change. There are a number of approaches to configuration management; the key is choosing a tool that meets the needs of your infrastructure and development teams.
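
To illustrate the pattern those tools share rather than any particular product, here is a toy sketch of declarative, idempotent configuration: describe the desired state (ideally rendered from templates kept in source control), and change a system only where it differs. The file path and contents are hypothetical:

```python
# Toy sketch of the declarative, idempotent pattern behind configuration
# management tools: apply the desired state, change only what differs.
from pathlib import Path

DESIRED_FILES = {
    # path -> desired content; in practice this would be rendered from
    # templates kept in source control
    "/etc/motd": "Managed host - local changes will be overwritten.\n",
}

def apply(desired: dict) -> None:
    for path_str, content in desired.items():
        path = Path(path_str)
        current = path.read_text() if path.exists() else None
        if current != content:
            path.write_text(content)
            print(f"changed: {path}")
        else:
            print(f"ok: {path}")

if __name__ == "__main__":
    apply(DESIRED_FILES)
```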

6. Avoid restrictive licensing policies

The great advantage of the public cloud is being able to use systems and software that let you grow a computing footprint as needed. Choosing open source software allows you to deploy to as many instances as needed without worrying about licensing costs; many vendors that offer commercial extensions or support for open source tools also offer flexible policies that let you autoscale or expand that compute footprint on reasonable terms. Inflexible software licensing and installation can make cloud infrastructures significantly more difficult to manage over time.

7. Develop an approach to learning from and mentoring others

Adopting new technologies and methods of managing systems can be challenging for individuals and teams. In the cloud migrations I have been a part of, there has always been tension and anxiety caused by the unfamiliar. Make sure that team members feel valued and that moving to new methods of operation is a learning process for everyone. While the skills and experience of Linux professionals are invaluable in cloud deployments, the exact skills required and the focus of day-to-day work will certainly change.

Download the full 2017 Open Source Jobs Report now.