System, method, and computer program product for improving memory systems

MS Smith - US Patent 9,432,298, 2016 - Google Patents
CPC classification H01L25/18: Assemblies consisting of a plurality of individual semiconductor or other solid-state devices; multistep manufacturing processes thereof, the devices being of types …

Nimble page management for tiered memory systems

Z Yan, D Lustig, D Nellans… - Proceedings of the Twenty …, 2019 - dl.acm.org
Software-controlled heterogeneous memory systems have the potential to increase the
performance and cost efficiency of computing systems. However, they can only deliver on …

Fundamental latency trade-off in architecting DRAM caches: Outperforming impractical SRAM-tags with a simple and practical design

MK Qureshi, GH Loh - 2012 45th Annual IEEE/ACM …, 2012 - ieeexplore.ieee.org
This paper analyzes the design trade-offs in architecting large-scale DRAM caches. Prior
research, including the recent work from Loh and Hill, has organized DRAM caches similar …

Die-stacked DRAM caches for servers: Hit ratio, latency, or bandwidth? Have it all with footprint cache

D Jevdjic, S Volos, B Falsafi - ACM SIGARCH Computer Architecture …, 2013 - dl.acm.org
Recent research advocates using large die-stacked DRAM caches to break the memory
bandwidth wall. Existing DRAM cache designs fall into one of two categories---block-based …

Unison cache: A scalable and effective die-stacked DRAM cache

D Jevdjic, GH Loh, C Kaynak… - 2014 47th Annual IEEE …, 2014 - ieeexplore.ieee.org
Recent research advocates large die-stacked DRAM caches in manycore servers to break
the memory latency and bandwidth wall. To realize their full potential, die-stacked DRAM …

CAMEO: A two-level memory organization with capacity of main memory and flexibility of hardware-managed cache

CC Chou, A Jaleel, MK Qureshi - 2014 47th Annual IEEE/ACM …, 2014 - ieeexplore.ieee.org
This paper analyzes the trade-offs in architecting stacked DRAM either as part of main
memory or as a hardware-managed cache. Using stacked DRAM as part of main memory …

Row buffer locality aware caching policies for hybrid memories

HB Yoon, J Meza, R Ausavarungnirun… - 2012 IEEE 30th …, 2012 - ieeexplore.ieee.org
Phase change memory (PCM) is a promising technology that can offer higher capacity than
DRAM. Unfortunately, PCM's access latency and energy are higher than DRAM's, and its …

A survey of techniques for architecting DRAM caches

S Mittal, JS Vetter - IEEE Transactions on Parallel and …, 2015 - ieeexplore.ieee.org
Recent trends of increasing core count and the memory bandwidth wall have led to major
overhauls in chip architecture. In the face of increasing cache capacity demands, researchers …

Large-reach memory management unit caches

A Bhattacharjee - Proceedings of the 46th Annual IEEE/ACM …, 2013 - dl.acm.org
Within the ever-important memory hierarchy, little research is devoted to Memory
Management Unit (MMU) caches, implemented in modern processors to accelerate …

Enabling efficient and scalable hybrid memories using fine-granularity DRAM cache management

J Meza, J Chang, HB Yoon, O Mutlu… - IEEE Computer …, 2012 - ieeexplore.ieee.org
Hybrid main memories composed of DRAM as a cache to scalable non-volatile memories
such as phase-change memory (PCM) can provide much larger storage capacity than …