Persistent Memory Usage in Linux


In most cases, when a machine crashes or fails, we lose whatever we had loaded into memory, which for some applications means significant time and effort to recover state when the system comes back online. At LinuxCon Europe, Maciej Maciejewski, Senior Software Engineer at Intel, talked about how persistent memory retains its contents across power failures and how applications can take advantage of it.

Maciejewski started by talking about how this works from a hardware perspective, since the hardware to do this has been around for some time. The idea is to take a dual in-line memory module (DIMM), which normally holds volatile DRAM, and populate it with non-volatile chips that behave like DRAM but retain their data across power cycles. If your machine goes down, crashes, whatever happens, you can take this non-volatile DIMM out and it still contains all of your data; nothing is lost. Intel is currently working on a product that will offer up to three terabytes of non-volatile memory.

The hardware is evolving, but it’s only part of the solution. Maciejewski explained that the software needs to be easy to use if the product is going to be successful. A couple of years ago, the Storage Networking Industry Association (SNIA) formed a working group around non-volatile memory programming, which came up with a standardized programming model that forms the basis of the NVM Library Maciejewski discussed. The NVM Library is actually a collection of libraries for developing software that takes advantage of persistent memory:

  • libpmem – Basic persistency handling
  • libpmemblk – Block access to persistent memory
  • libpmemlog – Log file on persistent memory (append-mostly) 
  • libpmemobj – Transactional Object Store on persistent memory 
  • libpmempool – Pool management utilities
  • librpmem – Replication
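To give a sense of the programming model, here is a minimal sketch of basic persistency handling with libpmem. It assumes a file on a DAX-mounted filesystem; the path, file size, and message are illustrative placeholders, not details from the talk.

    /* Minimal libpmem sketch: map a file backed by persistent memory,
     * write to it like ordinary memory, then make the stores durable.
     * Build with: cc pmem_hello.c -lpmem */
    #include <stdio.h>
    #include <string.h>
    #include <libpmem.h>

    #define POOL_SIZE (4 * 1024 * 1024)   /* 4 MiB example file */

    int main(void)
    {
        size_t mapped_len;
        int is_pmem;

        /* Map the file; on a real NVDIMM, is_pmem comes back nonzero. */
        char *addr = pmem_map_file("/mnt/pmem/example", POOL_SIZE,
                                   PMEM_FILE_CREATE, 0666,
                                   &mapped_len, &is_pmem);
        if (addr == NULL) {
            perror("pmem_map_file");
            return 1;
        }

        /* Store through the mapping as if it were ordinary DRAM... */
        strcpy(addr, "hello, persistent memory");

        /* ...then flush so the data survives a power failure. */
        if (is_pmem)
            pmem_persist(addr, mapped_len);  /* CPU cache flush, no syscall */
        else
            pmem_msync(addr, mapped_len);    /* fall back to msync(2) */

        pmem_unmap(addr, mapped_len);
        return 0;
    }

The last step is the important one: stores to a memory-mapped file only become durable once they are flushed, and pmem_persist does that from user space without a system call.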

One real-world example of an application that would benefit from persistent memory is an in-memory database such as Redis. Maciejewski mentioned that the obvious advantage is startup time: when all of the data lives in persistent memory and is worked with directly, there is no memory copy and no loading from disk into memory, so startup is effectively instant. With large datasets in the multi-terabyte range, going from hours to an instant is a big advantage.
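As a rough illustration of why the restart cost disappears, here is a hedged sketch using libpmemobj, the transactional object store from the list above. A counter kept in the pool's root object survives restarts, so reopening the pool picks the data up in place with no load phase; the pool path and layout name are made up for the example.

    /* libpmemobj sketch: a counter that persists across runs.
     * Build with: cc pmem_counter.c -lpmemobj */
    #include <stdio.h>
    #include <stdint.h>
    #include <libpmemobj.h>

    #define POOL_PATH "/mnt/pmem/counter.pool"   /* illustrative path */
    #define LAYOUT    "counter_layout"           /* illustrative name */

    struct my_root {
        uint64_t counter;
    };

    int main(void)
    {
        /* Reopen the pool if it exists; create it on the first run. */
        PMEMobjpool *pop = pmemobj_open(POOL_PATH, LAYOUT);
        if (pop == NULL)
            pop = pmemobj_create(POOL_PATH, LAYOUT, PMEMOBJ_MIN_POOL, 0666);
        if (pop == NULL) {
            perror("pmemobj_open/create");
            return 1;
        }

        PMEMoid root = pmemobj_root(pop, sizeof(struct my_root));
        struct my_root *rootp = pmemobj_direct(root);

        /* Update inside a transaction: a crash mid-update rolls back. */
        TX_BEGIN(pop) {
            pmemobj_tx_add_range(root, 0, sizeof(struct my_root));
            rootp->counter++;
        } TX_END

        printf("run count: %lu\n", (unsigned long)rootp->counter);
        pmemobj_close(pop);
        return 0;
    }

Run the program twice and the count keeps climbing, even across a reboot; the only "startup" work is mapping the pool back into the address space.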

Persistent memory usage is still a work in progress that will require time for adoption, but Maciejewski hopes that “this will revolutionize the computing that we have right now.”

To hear all the details and see the performance charts from the Redis example, watch the complete presentation below.
