Bringing an end to the hypervisor vs. bare metal debate

The debate over whether hypervisors are faster than bare metal resurfaced at the VMworld 2019 conference. VMware has long maintained that hypervisors have many advantages over bare metal, including efficiency and cost.

“Hypervisors hosting multiple virtual machines do offer some advantages over bare metal servers. Hypervisors allow virtual machines to be created instantly, providing more resources as needed for dynamic workloads. It is much harder to provide an additional physical server when it is needed. Hypervisors also allow for more utilization of a physical server, since it is able to run several virtual machines on one physical machine’s resources. Running several virtual machines on one physical machine is more cost and energy-efficient than running multiple underutilized physical machines for the same task.” 

Are these claims true?

Rob Hirschfeld, CEO and co-founder of RackN, agrees with VMware’s claims. The fact of the matter, he said, is that the machines people buy today to run in datacenters are hypervisor-optimized. As a result, these machines are more efficient running a hypervisor than running a single operating system.

“The CPUs are designed to run multiple VMs. Taking into account the resource constraints, the operating systems are not designed to run gigantic machines,” said Hirschfeld.

The way technology has evolved over the years, the industry has stopped assuming one giant server running just one operating system. “It’s antithetical to the way we’ve bought machines for the last 10 years because of virtualization.”

However, not everyone is buying giant servers, and not everyone needs virtualization. There are many use cases where users need smaller machines with fewer processors and moderate memory. These machines are more efficient running a single operating system than a hypervisor. Edge environments are a perfect example of where such cost-effective commodity servers make sense.

From that perspective, depending on how you set up your infrastructure, bare metal is going to be a better performer. It might also be more cost-effective and simpler to manage. “There might be many other benefits too; it’s not an A or B question,” said Hirschfeld.

That’s not going to stop people from going the hypervisor route with traditional infrastructure optimized for VMs. Hirschfeld’s advice to such users is not to buy a terabyte of RAM, as many CPUs as they can get, Fibre Channel SANs, and the like. Instead, find small, cheap machines and buy a lot of them. Datacenter design is always a balancing act between how you want to manage your infrastructure and what you want that infrastructure to do.

Hirschfeld believes that we are getting out of the era of general-purpose computing: the hyper-converged concept, where users buy one type of machine that will solve all of their problems. “That’s an expensive way to solve the problem, as it also assumes that you’re going to virtualize everything,” he said.

What’s wrong with virtualizing everything? “It’s very hard to install VMware. It takes a lot of knowledge and fiddling to get it right,” he said.

In conclusion

The world is moving toward containers. A lot of containers. Users can run Linux containers on bare metal infrastructure with great ease and efficiency. Hirschfeld clearly sees bare metal as an ideal route for edge workloads. However, he also sees the need for virtualization. It’s a balancing act. The debate over virtualization vs. bare metal sounds like a religious crusade, but it isn’t. It’s all about using the right tool for the job.

“People should not think that they should not use VMs or that they should not virtualize things,” Hirschfeld said. “In our experience, there are good reasons for a lot of pieces. Sometimes even a suboptimal solution, if it feels right to you, is a good solution.”