Non-Volatile Memory Management
Byte-addressable non-volatile memory (NVM) is an emerging type of memory that shares features with both DRAM and SSDs. Like SSDs, NVM stores data persistently: the system can be powered down or rebooted without losing data. But unlike SSDs, NVM is byte-addressable and fast enough to be connected directly to the memory controller. The CPU can thus access NVM just like contemporary DRAM, albeit at the cost of higher latency and lower bandwidth.
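The following minimal C sketch illustrates this direct access model. It maps a file on a DAX-enabled persistent-memory file system (the path /mnt/pmem/example is a placeholder) and then performs ordinary loads and stores on the mapping; msync makes the update durable here, although real NVM code typically uses cache-line flush instructions instead.

```c
/* Minimal sketch of byte-addressable NVM access. Assumes a file on a
 * DAX-mounted persistent-memory file system (placeholder path below). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0666);
    if (fd < 0 || ftruncate(fd, len) != 0)
        return 1;

    /* With DAX, the mapping goes directly to the NVM media; loads and
     * stores bypass the page cache. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "hello, persistent memory");   /* ordinary CPU store */
    msync(p, len, MS_SYNC);                  /* make the update durable */

    printf("%s\n", p);                       /* ordinary CPU load */
    munmap(p, len);
    close(fd);
    return 0;
}
```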
There has been a lot of research into NVM hardware in recent years. In 2019, Intel made its implementation, Intel Optane Persistent Memory, commercially available, with modules providing capacities of up to 512 GiB.
The high capacity per module, combined with a lower price than DRAM, makes NVM an attractive target for memory-intensive applications such as in-memory key-value stores and graph databases. For workloads exceeding the amount of DRAM in a system, such applications traditionally had to resort to slow swapping to secondary storage or had to be reengineered to distribute their workload across multiple machines.
We are researching efficient use of NVM in applications as an extension of main memory. We approach the subject from three angles: OS-driven transparent use of NVM, explicit use of NVM by NVM-aware applications, and simulation to better understand memory usage patterns.
Transparent Use of NVM
With hardware or operating system support for transparent use of NVM, applications can benefit from NVM without having to be modified. For this purpose, Intel introduced a hardware solution called Memory Mode. Memory Mode presents the larger NVM as regular main memory to the system and transparently uses the smaller DRAM as a fast cache to hide NVM's latency and bandwidth deficiencies.
However, as Memory Mode is completely transparent to the operating system and applications, it offers little flexibility. There is no way for applications or the operating system to ensure that certain parts of an address space are always backed by high-performance DRAM. As Memory Mode has no concept of processes, it is also not capable of ensuring fairness between applications or of prioritizing time-critical processes. We are therefore investigating similar transparent mechanisms as an extension to the operating system's memory management that allow:
- customizable allocation of DRAM and NVM (see the sketch after this list)
- fair allocation of DRAM and NVM between applications
- prioritization of critical processes
- direct access to infrequently used data in NVM without copying to DRAM
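As a rough sketch of what customizable placement can look like from user space today, the following C fragment relies on the existing Linux NUMA interface: when NVM is exposed as a separate, memory-only NUMA node (for example via the DAX kmem driver), mbind() can bind an address range to that node. The node number 2 is an assumption that varies between systems, and the mechanisms we investigate go beyond such per-range bindings.

```c
/* Sketch: bind a large, cold allocation to an NVM-backed NUMA node.
 * Assumes NVM is exposed as NUMA node 2 (system-specific) and that the
 * program is linked against libnuma (-lnuma) for the mbind() wrapper. */
#include <numaif.h>
#include <sys/mman.h>
#include <stdio.h>

#define NVM_NODE 2   /* hypothetical node id of the NVM region */

int main(void)
{
    size_t len = 1UL << 30;  /* 1 GiB of bulk data */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    unsigned long nodemask = 1UL << NVM_NODE;
    /* Place all pages of this range on the NVM node; DRAM remains
     * available for the rest of the address space. */
    if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) != 0) {
        perror("mbind");
        return 1;
    }

    /* ... use buf for infrequently accessed bulk data ... */
    return 0;
}
```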
NVM-aware Applications
While transparent use of NVM is very attractive, we want to determine whether explicit use of NVM by applications can yield additional benefits. At the application level, a library could have more insight into the workload and distribute memory between NVM and DRAM accordingly. Additionally, the application may provide hints about future access patterns, enabling further optimization. We are therefore researching abstractions for hybrid NVM/DRAM systems, such as NVM-aware hybrid data structures that give applications a simple way to place their bulk data in NVM.
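To illustrate what such an abstraction might look like, the following sketch (not our actual interface) keeps a small, hot index in DRAM while allocating the bulk values on NVM through the memkind library's file-backed PMEM kind; the mount point /mnt/pmem is a placeholder for a DAX-mounted file system.

```c
/* Sketch of a hybrid key-value structure: hot index in DRAM, bulk
 * values on NVM via a file-backed memkind PMEM kind. */
#include <memkind.h>
#include <stdlib.h>
#include <string.h>

struct entry {
    unsigned long key;   /* hot: kept in DRAM for fast lookups */
    char *value;         /* cold bulk data: allocated on NVM */
};

struct hybrid_store {
    memkind_t nvm;       /* allocator backed by persistent memory */
    struct entry *index; /* regular DRAM allocation */
    size_t count, capacity;
};

int hybrid_store_init(struct hybrid_store *s, size_t capacity)
{
    /* "/mnt/pmem" is a placeholder; 0 means no extra size limit. */
    if (memkind_create_pmem("/mnt/pmem", 0, &s->nvm) != 0)
        return -1;
    s->index = calloc(capacity, sizeof(*s->index)); /* DRAM */
    s->count = 0;
    s->capacity = capacity;
    return s->index ? 0 : -1;
}

int hybrid_store_put(struct hybrid_store *s, unsigned long key,
                     const char *value)
{
    if (s->count == s->capacity)
        return -1;
    char *v = memkind_malloc(s->nvm, strlen(value) + 1); /* NVM */
    if (!v)
        return -1;
    strcpy(v, value);
    s->index[s->count++] = (struct entry){ .key = key, .value = v };
    return 0;
}
```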
Nevertheless, we expect transparent hybrid NVM/DRAM memory management to be useful even for NVM-aware applications. When porting existing applications or designing new applications from scratch, the developer could adjust only the most critical data structures for manual NVM use and allow the operating system to transparently manage the remaining parts of the address space.
In addition to optimized data placement at runtime, hybrid NVM/DRAM data structures can also provide a path towards NVM-based persistence. Traditional applications need to serialize their state and transfer it to secondary storage. In contrast, a hybrid NVM/DRAM data structure already keeps its data on persistent media without serialization and could thus be leveraged for implicit persistence.
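To make this concrete, the sketch below uses libpmem from PMDK to keep a record directly in a memory-mapped persistent file and flush the update in place, with no serialization step. The file path and record layout are illustrative, and a production design would additionally need crash-consistent update protocols (for example, transactions as offered by libpmemobj).

```c
/* Sketch: persist a record in place with libpmem (PMDK). The path and
 * struct layout are illustrative; crash consistency (ordering,
 * transactions) is out of scope for this fragment. */
#include <libpmem.h>
#include <string.h>

struct record {
    unsigned long id;
    char name[64];
};

int store_record(unsigned long id, const char *name)
{
    size_t mapped_len;
    int is_pmem;
    struct record *r = pmem_map_file("/mnt/pmem/records", sizeof(*r),
                                     PMEM_FILE_CREATE, 0666,
                                     &mapped_len, &is_pmem);
    if (r == NULL)
        return -1;

    /* Update the record directly in the mapped NVM region. */
    r->id = id;
    strncpy(r->name, name, sizeof(r->name) - 1);
    r->name[sizeof(r->name) - 1] = '\0';

    /* Flush CPU caches so the update reaches persistent media. */
    if (is_pmem)
        pmem_persist(r, sizeof(*r));
    else
        pmem_msync(r, sizeof(*r));

    pmem_unmap(r, mapped_len);
    return 0;
}
```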
Simulation
Regardless of the approach, understanding access patterns is critical for designing high-performance hybrid NVM/DRAM systems: their performance depends greatly on finding good data placement policies.
We are leveraging Simutrace to collect detailed memory access traces, which enable us to explore the limits of hybrid NVM/DRAM systems and to design and evaluate data placement policies.
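As a simplified example of the analyses such traces enable (independent of Simutrace's actual API and trace format), the following sketch reads one address per line, counts accesses per page, and reports the hottest pages that would fit a hypothetical DRAM budget; the remaining pages would be candidates for direct placement in NVM.

```c
/* Sketch: derive a simple hot/cold page classification from a memory
 * access trace (one hexadecimal address per line on stdin). Pages are
 * ranked by access count; the top DRAM_PAGES pages would be placed in
 * DRAM, the rest in NVM. Trace format and budget are illustrative. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12
#define MAX_PAGES  (1 << 20)   /* capacity of the toy hash table */
#define DRAM_PAGES 1024        /* hypothetical DRAM budget in pages */

struct slot { unsigned long page; unsigned long count; };
static struct slot table[MAX_PAGES];

static struct slot *lookup(unsigned long page)
{
    size_t i = (page * 2654435761UL) % MAX_PAGES;
    while (table[i].count != 0 && table[i].page != page)
        i = (i + 1) % MAX_PAGES;          /* linear probing */
    table[i].page = page;
    return &table[i];
}

static int by_count_desc(const void *a, const void *b)
{
    unsigned long ca = ((const struct slot *)a)->count;
    unsigned long cb = ((const struct slot *)b)->count;
    return (cb > ca) - (cb < ca);
}

int main(void)
{
    unsigned long addr;
    while (scanf("%lx", &addr) == 1)
        lookup(addr >> PAGE_SHIFT)->count++;

    qsort(table, MAX_PAGES, sizeof(table[0]), by_count_desc);

    /* The hottest pages fit into the DRAM budget; everything else
     * would be served directly from NVM. */
    for (size_t i = 0; i < DRAM_PAGES && table[i].count > 0; i++)
        printf("DRAM candidate: page 0x%lx (%lu accesses)\n",
               table[i].page, table[i].count);
    return 0;
}
```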
Contact: Lukas Werling