Virtualization or Containers? Consider the Application

Rarely does the juxtaposition between the push of innovation and the pull of caution become as evident as it does in the adoption of new technologies. Time after time, technological innovators introduce new software and hardware into the market that are eagerly consumed by early adopters and retail consumers. And yet it is businesses that tend to hold back, weighing their options before deciding whether and how they will join the ranks of users of such new technologies.

Virtualization, the capability to abstract hardware away from the operating system and the applications that run within the OS, has been around for a long time, relatively speaking, in the technology world. Hypervisors such as KVM power a wide range of virtualization tools such as oVirt, RHEV, RDO, Proxmox, and OpenStack. Indeed, virtualization, once approached with caution like any other new technology, is being actively embraced by organizations as an increasingly desirable way to stretch hardware resources and more nimbly manage machines and the business-critical applications that run on them. With more than 250 members and supporting companies, the Open Virtualization Alliance, a Linux Foundation Collaborative Project, is a testament to the broad adoption of KVM and of virtualization in general.
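
To make that idea of abstracting hardware a bit more concrete, here is a minimal sketch of my own, not taken from any of the projects above, of how a KVM hypervisor can be queried from code. It assumes the libvirt Python bindings are installed and that a local qemu:///system endpoint is available; tools such as oVirt and OpenStack build far richer management layers on top of this same interface.

import libvirt

# Connect to the local KVM/QEMU hypervisor managed by libvirt.
conn = libvirt.open("qemu:///system")
try:
    # Each "domain" is a virtual machine running on abstracted hardware.
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{dom.name()}: {status}")
finally:
    conn.close()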

Today, the newcomer on the block is container technology. Containers abstract the operating system itself, allowing multiple user namespaces to share a single kernel. The result of this abstraction is that applications and services can run as separate instances without each needing its own full operating system. Inside a container are the application and any specialized libraries it might need to run, and little else.
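
As a rough illustration of that shared-kernel point, and not something from the original article, the short sketch below uses the Docker SDK for Python (pip install docker) against a running Docker daemon: the container reports the same kernel as the host even though its user space is completely separate.

import platform
import docker

client = docker.from_env()

# Run a throwaway Alpine container; the image holds little more than a shell and libc.
container_kernel = client.containers.run(
    "alpine", "uname -r", remove=True
).decode().strip()

print("host kernel:     ", platform.release())
print("container kernel:", container_kernel)  # same kernel, separate user space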

“Newcomer” is a bit tongue-in-cheek. Containers have been around quite a while as well. Their recent popularity has been brought about by the introduction of Docker, the now-ubiquitous container technology that has addressed the shortcomings of earlier container implementations and improved container portability. Indeed, that portability has become a key feature of Docker containers: create a container with Docker and you can put that container on any other Docker host machine. This feature, coupled with the advantage of being able to develop applications without having to worry about the underlying operating system, has made containers very attractive to developers and system administrators alike.
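
That portability claim can be sketched in a few lines as well. This is an illustrative example of my own, with a placeholder image name and registry, using the same Docker SDK: an image built on one host can be pushed to a registry and then pulled and run, unchanged, on any other Docker host.

import docker

client = docker.from_env()

# On the build host: package the application and its libraries into an image.
image, _build_logs = client.images.build(path=".", tag="registry.example.com/myapp:1.0")
client.images.push("registry.example.com/myapp", tag="1.0")

# On any other Docker host: pull the same image and run it as-is.
client.images.pull("registry.example.com/myapp", tag="1.0")
client.containers.run("registry.example.com/myapp:1.0", detach=True)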

When Worlds Collide?

Having had the opportunity to work with two open source projects, one centered on virtualization and the other on containers, I have been able to see the advantages and disadvantages of each approach. With oVirt, KVM-based virtual machines are managed all the way up to the datacenter level, giving administrators flexibility in managing virtual networks, clusters, and pools. Project Atomic provides a robust set of container management tools on a just-enough operating system, delivering maximum efficiency for developers and administrators alike.

As projects like Atomic continue to make strides within the open source ecosystem, the question inevitably arises: is virtualization’s fate sealed by the rise of containers? Dramatic hyperbole aside, it’s a question that has merit, because businesses are looking to adopt the best technology for their needs. Wouldn’t they run the risk of adopting a technology that was on its way to becoming obsolete?

Such concerns are based on the assumption that any new technology must automatically supplant the old, which is not always a safe assumption to make. The advent of cell phones in the 90s may have all but replaced the standard payphone and significantly reduced the deployment of home landlines, but not every innovation causes the demise of what came before it.

Ships and trains were not completely removed from existence when commercial airplanes took off. These forms of transportation are still around. But one thing is also true: the use of trains and ships was radically changed by air transportation. Passenger train use, particularly in North America, fell dramatically, as did the use of passenger ships as a way to reach a destination. Trains are now used for freight transport, or for passenger transport on routes short enough to match the efficiencies of jet airliners. Freight shipping was largely unaffected by air transport, but all but a few passenger ships now serve as vacation destinations rather than as a means of transoceanic transportation.

This kind of transformation is what can happen whenever an older technology meets its descendant. As people begin to rapidly adopt the new technology, the older technology is adapted to a new use. If it can’t be, then, like the payphones of lore, it may fade away.

So, is virtualization about to be disconnected by containers?

Two Sides… One Coin

The simple answer is… no.

The flexibility and utility of a virtualized machine in the datacenter are simply too great to be dismissed. Putting aside for a moment the relative immaturity of container use in the datacenter–because, to be fair, that will fade with more IT experience–the fact is that virtualization is ready to deploy applications now. Any application or service that can run on a physical machine can run on a virtual machine, with essentially no changes to the application or the configuration of the machine.

We see this ready-to-go status affecting the adoption of a not-so-old variation of virtual technology: cloud computing. Cloud computing depends on virtualization as it manages virtual machines on an elastic basis. But to take advantage of that elasticity–automated control of virtual resources by the apps themselves–applications have to be re-coded to connect with a cloud platform’s API. If this does not happen, then cloud computing is no different than running a virtual datacenter. It’s the elasticity that makes cloud truly cloud computing. But even today, developers and administrators are finding that the work to gain that elasticity may not be worth the effort.
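
What “re-coded to connect with a cloud platform’s API” looks like in practice is worth a quick sketch. The example below is my own illustration, assuming the openstacksdk package, a cloud defined in clouds.yaml, and placeholder flavor, image, and network names; the essential point is that the application itself asks the platform for more capacity instead of waiting for an administrator.

import openstack

# Connect using credentials from clouds.yaml ("mycloud" is a placeholder name).
conn = openstack.connect(cloud="mycloud")

def scale_out(name_prefix, count):
    """Boot additional worker instances through the cloud platform's API."""
    flavor = conn.compute.find_flavor("m1.small")       # placeholder flavor
    image = conn.compute.find_image("fedora-minimal")   # placeholder image
    network = conn.network.find_network("private")      # placeholder network
    for i in range(count):
        conn.compute.create_server(
            name=f"{name_prefix}-{i}",
            flavor_id=flavor.id,
            image_id=image.id,
            networks=[{"uuid": network.id}],
        )

# An application with this kind of hook can react to its own load; without it,
# the cloud is simply a virtual datacenter managed by hand.
scale_out("worker", 2)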

And the decision point between virtual datacenters and cloud computing is analogous to that between virtualization and containers. Cloud is not better than virtual datacenters, and vice versa. Nor are containers better than virtualization. They are different, and because of those differences, each is best suited to specific use cases.

Right now, conventional wisdom seems to cast containers squarely as the foundation technology for platform-as-a-service (PaaS) software, like OpenShift, and virtualization as an infrastructure-as-a-service (IaaS) technology. That’s an acceptable way to classify container and virtualization use, but it might be too limiting. Containers can have a place in IaaS, for instance, as tools like OpenStack Nova and XenServer support containers.

A broader way to frame the choice might be the provenance of your applications. Existing, legacy applications might be a better fit for virtualization. You don’t have to re-engineer them for the virtual datacenter, and while legacy applications do need some alteration to run on cloud platforms, the changes are not significant.

Newer applications, on the other hand, could have a place with containers. For services and applications that are just being put together, the very DevOps-friendly environment that containers provide tends to be a better home.

Even this is not an absolute guideline: older applications can be containerized, and newer applications can run just fine on virtual stacks. But as a general rule, building your new services on containers will take full advantage of this new technology, while keeping your existing applications on virtual machines will save you time and resources now.

Technology will keep moving forward, as both virtualization and containers demonstrate. But moving forward does not mean that anything has to be left behind.