LinuxCon Europe and KVM Forum were recently held in Düsseldorf, and this year's conferences reflected the changes taking place in the Linux ecosystem generally – and around KVM in particular.
Now that a few weeks have passed since the events and I have had time to reflect on what took place, a few prominent messages stand out. In case you were not able to be there, I'll share what I see as the salient points that emerged.
LinuxCon Europe was Big
First of all, the conference was big. LinuxCon Europe was quite a bit larger than last year's event. With the array of co-located events, including KVM Forum, there were over 2,000 delegates at LinuxCon. This really shows the strength of interest in Linux and in related open source projects such as KVM.
The Importance of KVM and OpenStack to the Open Cloud
At LinuxCon Europe, Mike Kadera from Intel and I presented a session on KVM, OpenStack and the Open Cloud to a standing-room-only crowd. We discussed the architecture of KVM (Kernel-based Virtual Machine) and looked at the requirements for building open clouds. KVM is one of a number of hypervisors available with OpenStack, but it is the default – and most commonly used – hypervisor for OpenStack.
In our view, there are three key reasons KVM is so frequently used with OpenStack:
KVM excels at the key criteria for choosing a hypervisor: cost, scale and performance, security, and interoperability.
There is a development affinity between KVM and OpenStack. They are both open source projects, and KVM is the default hypervisor for OpenStack development.
Because of the first two reasons, there is also a deployment affinity, which means that KVM is the best-supported, easiest-to-deploy and most full-featured driver.
In our presentation, Mike also shared Intel's experience with KVM and OpenStack in building the company's internal IT cloud. He talked about Intel's goals of high utilization, velocity through automation and self-service, and zero business impact. He also reflected on the benefits the company has seen in performance and stability, the lessons learned during implementation, and how management, enabled by OpenStack, becomes key.
We also looked at new and emerging developments in KVM and OpenStack. On the KVM side, we are expecting heterogeneous processor support spanning ARM, POWER, System z and GPUs; network function virtualization; and additional performance improvements. OpenStack has also just released Juno, which adds automated provisioning and management of big data clusters using Hadoop and Spark.
New IDC White Paper on KVM
At the conference, a new IDC white paper, “KVM – Open Source Virtualization for the Enterprise and OpenStack Clouds,” authored by Gary Chen, was also released. Sponsored by the Open Virtualization Alliance (OVA), the white paper examines the current state of KVM and identifies the elements critical to KVM's future success.
“KVM plays a key role as the open source virtualization underpinning for both enterprise virtualization under traditional virtualization management, as well as in next-generation applications that are run on new cloud infrastructures such as OpenStack,” the white paper points out. The paper goes on to observe that KVM is rapidly increasing its market share and that “innovative projects like OpenStack continue to open more doors for KVM.”
Powerful New KVM Use Case – Network Function Virtualization
KVM Forum was also considerably bigger than last year, with lots of interaction and conversation.
KVM use cases are expanding beyond simply virtualizing Linux servers. Network function virtualization is a cutting-edge use case for KVM that is being embraced by telcos.
To shed light on this new trend, on the last day of KVM Forum, the OVA sponsored a KVM panel discussion. As the name implies, network function virtualization is about moving away from networking run on specific hardware to instead being virtualized on general-purpose hardware, thereby increasing flexibility and reducing cost.
In the past, communications networks have been built with dedicated routers, switches and hubs, with the configuration of all the components being manual and complex. The idea now is to take that network function and put it into software running on standard hardware.
The discussion touched on the demands – in terms of latency, throughput, and packet jitter – that network function virtualization places on KVM when it is being run on general purpose hardware and used to support high data volume. There was a lively discussion about how to get fast communication between the virtual machines as well as issues such as performance and sharing memory, as attendees drilled down into how KVM could be applied in new ways.
Watch the video below for a replay of the panel discussion.
The Takeaway
Looking back at the conferences now, the key takeaway for me is that KVM use is rapidly expanding. First used to virtualize Linux servers, it has now evolved to form the basis of the open cloud, supporting emerging use cases such as network function virtualization and running on many more processor architectures.
Thanks to the Open Virtualization Alliance for providing thought leadership and facilitating the conversation around these important issues at LinuxCon Europe through the LinuxCon breakout, IDC white paper, and KVM Forum panel discussion.
Adam Jollans leads the worldwide, cross-IBM Linux and open virtualization strategy for IBM. In this role he is responsible for developing and communicating the strategy for IBM’s Linux and KVM activities across IBM, including systems, software and services.
He is based in Hursley, England, following a two-year assignment to Somers, NY where he led the worldwide Linux marketing strategy for IBM Software Group. He has been involved with Linux since 1998, and prior to his U.S. assignment he led the European marketing activities for IBM Software on Linux.