Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

In part 1 of this series, we defined cloud computing and discussed the different cloud service models and the needs of users and platform providers. This time we’ll discuss some of the challenges that conventional data centers face and why automation and virtualization alone cannot fully address them. Part 3 will cover the fundamental components of clouds and existing cloud solutions.

For more on the basic tenets of cloud computing and a high-level look at OpenStack architecture, download the full sample chapter from The Linux Foundation’s online Essentials of OpenStack Administration course.

Conventional Data Centers

Conventional data centers are known for having a lot of hardware that is, by current standards at least, grossly underutilized. In addition to that, all that hardware (and the software that runs on it) is usually managed with relatively little automation.

Even though many things happen automatically these days (configuration deployment systems such as Puppet and Chef help here), the overall level of automation is typically not very high.

In conventional data centers, it is very hard to find the right balance between capacity and utilization. This is complicated by the fact that many workloads do not fully utilize a modern server: for instance, some may use a lot of CPU but little memory, or a lot of disk IO but little CPU. Still, data centers want enough capacity to handle spikes in load, but they don’t want to pay for idle hardware.

Whatever the case, it is clear that modern data centers require a lot of physical space, power, and cooling. The more efficiently they run, the better for everyone involved.

Figure 1: In a conventional data center some servers may use a lot of CPU but little memory (MEM), or a lot of disk IO but little CPU.
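
To make the stranded-capacity problem concrete, here is a small back-of-the-envelope calculation in Python. The per-workload CPU and memory figures are purely hypothetical, but they show how dedicating one physical server per workload leaves most of each machine idle.

```python
# Hypothetical per-workload demand, expressed as a fraction of one server's
# CPU and memory capacity (illustrative numbers only).
workloads = [
    {"cpu": 0.70, "mem": 0.10},  # CPU-heavy, little memory
    {"cpu": 0.10, "mem": 0.65},  # memory-heavy, little CPU
    {"cpu": 0.15, "mem": 0.15},  # light on both
]

# Conventional model: one dedicated physical server per workload.
servers = len(workloads)
avg_cpu = sum(w["cpu"] for w in workloads) / servers
avg_mem = sum(w["mem"] for w in workloads) / servers
print(f"{servers} servers -> average CPU {avg_cpu:.0%}, average MEM {avg_mem:.0%}")
# Prints: 3 servers -> average CPU 32%, average MEM 30%
```

In this example, roughly two-thirds of the purchased capacity sits idle, yet each server still has to be sized, powered, and cooled for its workload’s peak.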

A conventional data center may face several challenges to efficiency. Often there are silos, or divisions of duties among teams: a systems team that handles ongoing operating system maintenance, a hardware team that handles physical and plant maintenance, database and network teams, and perhaps separate storage and backup teams as well. While this allows for specialization in each area, the efficiency of producing a new instance that meets a customer’s requirements is often low.

A conventional data center also tends to grow organically; that is, changes are not always well thought out. If something needs doing at 2 a.m., a person from the relevant team may make whatever changes they think are necessary. Without proper documentation, the other teams are unaware of those changes, and figuring them out later takes a lot of time, energy, and resources, which further lowers efficiency.

Manual Intervention

One of the problems arises when a data center needs to expand: new hardware is ordered and, once it arrives, it’s installed and provisioned manually. The hardware is often specialized, which makes it expensive, and the manual provisioning process is, in turn, costly, slow, and inflexible.

What is so bad about manual provisioning? Think about it: network integration, monitoring, setting up high availability, billing… There is a lot to do, and some of it is not simple. None of these tasks are hard to automate, yet until recently this was hardly ever done.

Automation frameworks such as Puppet, Chef, Juju, Crowbar, or Ansible can automate a fair amount of the work in modern data centers. However, even with these frameworks in place, there are many data center tasks they cannot do or do not do well.
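
What these frameworks have in common is a declarative, idempotent model: you describe the desired state, and the tool changes the system only when it drifts from that state. The Python sketch below is not how Puppet, Chef, or Ansible are implemented; it is just a minimal illustration of that pattern, with a made-up config file path and contents.

```python
from pathlib import Path

def ensure_file(path: str, content: str) -> bool:
    """Bring a file to the desired state; return True only if a change was needed."""
    target = Path(path)
    if target.exists() and target.read_text() == content:
        return False  # already compliant, so do nothing (idempotent)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content)
    return True

# Declare the desired state once; running this repeatedly is safe.
# The file path and NTP server below are purely illustrative.
changed = ensure_file("/tmp/demo-ntp.conf", "server ntp.example.com iburst\n")
print("changed" if changed else "already in desired state")
```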

Virtualization

A platform provider needs automation, flexibility, efficiency, and speed, all at low cost. We have automation tools, so what is the missing piece? Virtualization!

Virtualization is not a new thing. It has been around for years, and many people have been using it extensively. Virtualization comes with the huge advantage of decoupling the software from the hardware it runs on. Modern server hardware can be used much more efficiently when combined with virtualization. Virtualization also allows for a much higher level of automation than standard IT setups do.

Figure 2: Virtualization flexibility.

Virtualization and Automation

For instance, deploying a new system in a virtualized environment is fairly easy, because all it takes is creating a new Virtual Machine (VM). This helps the platform provider plan better when buying new hardware, preparing it, and integrating it into the data center. VMware, KVM on Linux, and Microsoft Hyper-V are typical examples of such environments.
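
As a concrete illustration, on a Linux host running KVM, “creating a new VM” can be reduced to a few libvirt API calls. The sketch below uses the libvirt Python bindings and assumes a qcow2 disk image already exists at the path shown; the VM name, sizing, and paths are all placeholders.

```python
import libvirt  # libvirt-python bindings

# Minimal, illustrative KVM domain definition; names and paths are placeholders.
domain_xml = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(domain_xml)       # register the VM definition
dom.create()                           # boot it
print(f"Started VM: {dom.name()}")
conn.close()
```

Because the whole operation is an API call rather than a trip to the server room, it can be scripted and repeated.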

Yet, the situation is not ideal, because in standard virtualized environments many things still need to be done by hand.

Customers will typically not be able to create new VMs on their own; they need to wait for the provider to do it for them. The infrastructure provider will first create storage (such as a Ceph volume, a SAN volume, or an iSCSI LUN), attach it to the VM, and then perform the OS installation and basic configuration.
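
The storage-attach step itself is also scriptable. Continuing the hypothetical libvirt example above, attaching an extra disk to a running VM is one more API call; the iSCSI device path and VM name below are made up for illustration.

```python
import libvirt

# Illustrative disk definition pointing at an imaginary iSCSI LUN visible on the host.
disk_xml = """
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/ip-192.0.2.10:3260-iscsi-iqn.2024-01.example:storage-lun-0'/>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("demo-vm")
# Attach to the running guest and persist the change in its saved configuration.
dom.attachDeviceFlags(disk_xml,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```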

In other words, standard virtualization is not enough to fulfill either providers’ or their customers’ needs. Enter cloud computing!

In Part 3 of this series, we’ll contrast what we’ve learned about conventional, un-automated infrastructure offerings with what happens in the cloud.

Read the other articles in this series: 

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!