Authors
Maohua Zhu, Youwei Zhuo, Chao Wang, Wenguang Chen, Yuan Xie
Publication date
2018/2/15
Journal
IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Publisher
IEEE
Description
Graphics processing units (GPUs) are widely used to accelerate data-intensive applications. To improve the performance of data-intensive applications, higher GPU memory bandwidth is desirable. Traditional graphics double data rate memories achieve higher bandwidth by increasing frequency, which leads to excessive power consumption. Recently, a new memory technology called high-bandwidth memory (HBM) based on 3-D die-stacking technology has been used in the latest generation of GPUs, which can provide both high bandwidth and low power consumption with in-package-stacked DRAM memory. However, the capacity of integrated in-package-stacked memory is limited (e.g., only 4 GB for the state-of-the-art HBM-enabled GPU, AMD Radeon Fury X). In this paper, we implement two representative data-intensive applications, convolutional neural network (CNN) and breadth-first search (BFS) on an …
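To make the description above concrete, the following is a minimal sketch (not from the paper) of a level-synchronous BFS kernel in CUDA, the kind of irregular, memory-bandwidth-bound workload the authors evaluate on an HBM-enabled GPU. The CSR layout, kernel structure, and toy graph are illustrative assumptions only.

```cuda
// Sketch: level-synchronous BFS on a CSR graph (assumed example, not the paper's code).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void bfs_level(const int *row_ptr, const int *col_idx,
                          int *dist, int num_nodes, int level, int *changed) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= num_nodes || dist[v] != level) return;
    // Frontier expansion: the scattered reads of col_idx/dist stress DRAM bandwidth.
    for (int e = row_ptr[v]; e < row_ptr[v + 1]; ++e) {
        int u = col_idx[e];
        if (dist[u] == -1) {      // unvisited neighbor
            dist[u] = level + 1;
            *changed = 1;         // another BFS level is needed
        }
    }
}

int main() {
    // Toy 4-node path graph 0-1-2-3 in CSR form; BFS starts from node 0.
    int h_row_ptr[] = {0, 1, 3, 5, 6};
    int h_col_idx[] = {1, 0, 2, 1, 3, 2};
    int h_dist[]    = {0, -1, -1, -1};
    const int n = 4;

    int *d_row_ptr, *d_col_idx, *d_dist, *d_changed;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_dist, sizeof(h_dist));
    cudaMalloc(&d_changed, sizeof(int));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_dist, h_dist, sizeof(h_dist), cudaMemcpyHostToDevice);

    for (int level = 0, changed = 1; changed; ++level) {
        changed = 0;
        cudaMemcpy(d_changed, &changed, sizeof(int), cudaMemcpyHostToDevice);
        bfs_level<<<(n + 255) / 256, 256>>>(d_row_ptr, d_col_idx, d_dist,
                                            n, level, d_changed);
        cudaMemcpy(&changed, d_changed, sizeof(int), cudaMemcpyDeviceToHost);
    }

    cudaMemcpy(h_dist, d_dist, sizeof(h_dist), cudaMemcpyDeviceToHost);
    for (int v = 0; v < n; ++v) printf("dist[%d] = %d\n", v, h_dist[v]);
    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_dist); cudaFree(d_changed);
    return 0;
}
```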
Total citations
[Citations-per-year chart, 2017–2024]
Scholar articles
M Zhu, Y Zhuo, C Wang, W Chen, Y Xie - IEEE Transactions on Very Large Scale Integration …, 2018