Last-level caches (LLCs) are large structures with significant power requirements. They can be quite inefficient. On average, a cache block in a 2MB LRU-managed LLC is dead 86% of …
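The "dead 86% of the time" statistic above refers to cache efficiency: the fraction of a block's residency during which it is still live (another access is coming) versus dead (no further access before eviction). A toy sketch of how such a number could be measured, assuming a fully associative LRU cache and an invented access trace (this is illustrative only, not the cited paper's methodology):

```python
from collections import OrderedDict

def cache_efficiency(trace, capacity):
    """Simulate a fully associative LRU cache over `trace` (one block id
    per time step) and return the fraction of occupied block-cycles that
    were live. Dead fraction = 1 - efficiency."""
    cache = OrderedDict()   # block -> time of most recent access (LRU order)
    birth = {}              # block -> time of insertion
    live_cycles = 0         # cycles from insertion to last touch
    total_cycles = 0        # cycles from insertion to eviction

    def retire(block, now):
        nonlocal live_cycles, total_cycles
        live_cycles += cache[block] - birth[block]
        total_cycles += now - birth[block]

    for now, block in enumerate(trace):
        if block in cache:
            cache.move_to_end(block)   # refresh LRU position
            cache[block] = now
        else:
            if len(cache) >= capacity:
                victim = next(iter(cache))   # least recently used block
                retire(victim, now)
                del cache[victim]
                del birth[victim]
            cache[block] = now
            birth[block] = now

    end = len(trace)
    for block in list(cache):   # account for blocks still resident
        retire(block, end)
    return live_cycles / total_cycles if total_cycles else 0.0
```

A streaming trace with no reuse yields efficiency 0.0: every block is dead from the moment it is inserted, which is the pathology the snippet describes.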
While hardware technology has undergone major advancements over the past decade, transaction processing systems have remained largely unchanged. The number of cores on …
H Liu, M Ferdman, J Huh… - 2008 41st IEEE/ACM …, 2008 - ieeexplore.ieee.org
Data caches in general-purpose microprocessors often contain mostly dead blocks and are thus used inefficiently. To improve cache efficiency, dead blocks should be identified and …
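Dead-block identification of the kind this snippet describes is commonly done with a predictor that learns which instruction addresses tend to perform the last touch of a block before eviction. A minimal hypothetical sketch (table organization, counter width, and the PC-indexed scheme are assumptions, not the cited paper's design):

```python
class DeadBlockPredictor:
    """Predict a block dead right after it is touched by a PC that has
    historically performed last touches. Uses per-PC saturating counters."""

    def __init__(self, bits=2):
        self.table = {}               # last-touch PC -> saturating counter
        self.max = (1 << bits) - 1    # counter saturates at 2^bits - 1

    def predict_dead(self, pc):
        # Predict dead when the counter for this PC is in its upper half.
        return self.table.get(pc, 0) > self.max // 2

    def train(self, last_pc, was_dead):
        # On eviction, strengthen the counter if the block really was dead
        # after `last_pc` touched it; weaken it if the prediction was wrong
        # (the block was reused after being marked dead).
        c = self.table.get(last_pc, 0)
        self.table[last_pc] = min(c + 1, self.max) if was_dead else max(c - 1, 0)
```

Blocks predicted dead can then be prioritized for eviction or bypassed entirely, which is the efficiency gain the snippet alludes to.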
A Moshovos - … Symposium on Computer Architecture (ISCA'05), 2005 - ieeexplore.ieee.org
It has been shown that many requests miss in all remote nodes in shared memory multiprocessors. We are motivated by the observation that this behavior extends to much …
Coherent read misses in shared-memory multiprocessors account for a substantial fraction of execution time in many important scientific and commercial workloads. We propose …
DA Jiménez, E Teran - Proceedings of the 50th Annual IEEE/ACM …, 2017 - dl.acm.org
The disparity between last-level cache and memory latencies motivates the search for efficient cache management policies. Recent work in predicting reuse of cache blocks …
Scaling the performance of shared-everything transaction processing systems to highly-parallel multicore hardware remains a challenge for database system designers. Recent …
C Isen, L John - Proceedings of the 42nd Annual IEEE/ACM …, 2009 - dl.acm.org
Dynamic Random Access Memory (DRAM) is used as the bulk of the main memory in most computing systems and its energy and power consumption has become a first-class design …