Unifying primary cache, scratch, and register file memories in a throughput processor

M. Gebhart, S. W. Keckler, B. Khailany, R. Krashinsky, W. J. Dally
2012 45th Annual IEEE/ACM International Symposium on Microarchitecture, 2012. ieeexplore.ieee.org
Modern throughput processors such as GPUs employ thousands of threads to drive high-bandwidth, long-latency memory systems. These threads require substantial on-chip storage for registers, cache, and scratchpad memory. Existing designs hard-partition this local storage, fixing the capacities of these structures at design time. We evaluate modern GPU workloads and find that they have widely varying capacity needs across these different functions. Therefore, we propose a unified local memory which can dynamically change the partitioning among registers, cache, and scratchpad on a per-application basis. The tuning that this flexibility enables improves both performance and energy consumption, and broadens the scope of applications that can be efficiently executed on GPUs. Compared to a hard-partitioned design, we show that unified local memory provides a performance benefit as high as 71% along with an energy reduction up to 33%.
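As context for the per-application tuning the abstract describes, current CUDA GPUs already expose a coarse form of this flexibility: the runtime call cudaFuncSetCacheConfig lets an application bias a fixed pool of on-chip SRAM toward either L1 cache or shared (scratchpad) memory, while the register file capacity stays fixed. The sketch below only illustrates that existing knob and is not the paper's unified design; the kernel name, buffer sizes, and problem size are hypothetical.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A kernel that leans on scratchpad (shared) memory; for such a kernel an
// application would prefer a larger shared-memory carve-out.
__global__ void scratchpad_heavy(const float* in, float* out, int n) {
    __shared__ float tile[256];                 // per-block scratchpad buffer
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        tile[threadIdx.x] = in[i];
        __syncthreads();
        out[i] = tile[threadIdx.x] * 2.0f;      // trivial stand-in computation
    }
}

int main() {
    // Existing GPUs expose only a coarse, per-kernel choice between L1 cache
    // and shared-memory capacity; the register file is not part of the trade.
    // The unified local memory proposed in the paper would let all three
    // pools (registers, cache, scratchpad) be resized per application.
    cudaFuncSetCacheConfig(scratchpad_heavy, cudaFuncCachePreferShared);

    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    scratchpad_heavy<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    cudaFree(in);
    cudaFree(out);
    printf("done\n");
    return 0;
}
```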