Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause …
The next generation of supercomputers will break the exascale barrier. Soon we will have systems capable of at least one quintillion (billion billion) floating-point operations per …
Convolutional neural networks (CNNs) are at the heart of deep learning applications. Recent works, PRIME [1] and ISAAC [2], demonstrated the promise of using resistive random access …
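The crossbar idea these works build on can be illustrated with a small numerical sketch (an idealized simplification, not the PRIME or ISAAC designs themselves): weights are stored as cell conductances, input activations are applied as voltages, and each column current approximates one dot product of the convolution, i.e., an analog matrix-vector multiply. All sizes and value ranges below are assumed for illustration.

```python
import numpy as np

# Minimal sketch of analog matrix-vector multiplication on a ReRAM crossbar.
# Idealized model: no ADC/DAC quantization, device noise, or IR drop.

rng = np.random.default_rng(0)

rows, cols = 128, 64                          # crossbar dimensions (assumed)
weights = rng.uniform(-1, 1, (rows, cols))    # unrolled CNN filter weights (assumed)

# Map signed weights onto a positive/negative conductance pair per column.
g_pos = np.clip(weights, 0, None)
g_neg = np.clip(-weights, 0, None)

inputs = rng.uniform(0, 1, rows)              # input activations applied as voltages

# Ohm's law + Kirchhoff's current law: each column current sums v_i * g_ij,
# i.e., one dot product per column, computed in place in the array.
i_pos = inputs @ g_pos
i_neg = inputs @ g_neg
outputs = i_pos - i_neg                       # differential sensing recovers signed result

# The analog result matches the digital matrix-vector product.
assert np.allclose(outputs, inputs @ weights)
```

Because every column computes its dot product concurrently, one crossbar read evaluates an entire matrix-vector product without moving the weights out of the array, which is the source of the efficiency these accelerators report.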
Processing-in-memory (PIM) is a promising solution to address the "memory wall" challenges for future computer systems. Previously proposed PIM architectures put additional …
Many modern workloads, such as neural networks, databases, and graph processing, are fundamentally memory-bound. For such workloads, the data movement between main …
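A back-of-the-envelope arithmetic-intensity estimate makes the memory-bound claim concrete. This is a rough roofline-style sketch with assumed, round hardware numbers, not measurements from any of the works listed here: when a kernel performs only a few operations per byte fetched, achievable performance is capped by memory bandwidth rather than compute throughput.

```python
# Roofline-style estimate for a memory-bound kernel (FP32 AXPY: y = a*x + y).
# The bandwidth and peak-FLOP figures are assumed values for illustration only.

peak_flops = 10e12        # 10 TFLOP/s of compute (assumed)
mem_bw = 200e9            # 200 GB/s of DRAM bandwidth (assumed)

# AXPY moves 3 floats per element (read x, read y, write y) for 2 FLOPs (mul + add).
bytes_per_elem = 3 * 4
flops_per_elem = 2
arithmetic_intensity = flops_per_elem / bytes_per_elem   # ~0.17 FLOP/byte

attainable = min(peak_flops, arithmetic_intensity * mem_bw)
print(f"Arithmetic intensity: {arithmetic_intensity:.2f} FLOP/byte")
print(f"Attainable: {attainable / 1e9:.0f} GFLOP/s of {peak_flops / 1e12:.0f} TFLOP/s peak")
# -> ~33 GFLOP/s, well under 1% of peak compute: the kernel's time and energy
#    are dominated by data movement, which is the case PIM aims to address.
```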
S Li, D Niu, KT Malladi, H Zheng, B Brennan… - Proceedings of the 50th …, 2017 - dl.acm.org
Data movement between the processing units and the memory in traditional von Neumann architecture is creating the "memory wall" problem. To bridge the gap, two approaches, the …
S Li, C Xu, Q Zou, J Zhao, Y Lu, Y Xie - Proceedings of the 53rd Annual …, 2016 - dl.acm.org
Processing-in-memory (PIM) provides high bandwidth, massive parallelism, and high energy efficiency by implementing computations in main memory, thereby eliminating the …
M He, C Song, I Kim, C Jeong, S Kim… - 2020 53rd Annual …, 2020 - ieeexplore.ieee.org
Advances in machine learning (ML) have ignited hardware innovations for efficient execution of ML models, many of which are memory-bound (e.g., long short-term …
S Jain, A Ranjan, K Roy… - IEEE Transactions on …, 2017 - ieeexplore.ieee.org
In-memory computing is a promising approach to addressing the processor-memory data transfer bottleneck in computing systems. We propose spin-transfer torque compute-in …