T Andrulis, JS Emer, V Sze - … of the 50th Annual International Symposium …, 2023 - dl.acm.org
Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural Network (DNN) inference by reducing costly data movement and by using resistive RAM …
S Kang, G Park, S Kim, S Kim, D Han… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
This paper presents a detailed overview of sparsity exploitation in deep neural network (DNN) accelerators. Despite the algorithmic advancements which drove DNNs to become …
Computing-in-memory (CIM) is a promising architecture for energy-efficient neural network (NN) processors. Several CIM macros have demonstrated high energy efficiency, while CIM …
The high computational complexity and large number of parameters of deep neural networks (DNNs) have become the heaviest burden in deep learning hardware design …
S Yang, W Chen, X Zhang, S He, Y Yin… - Proceedings of the ACM …, 2021 - dl.acm.org
Emergent ReRAM-based accelerators support in-memory computation to accelerate deep neural network (DNN) inference. Weight matrix pruning of DNNs is a widely used technique …
Compute-in-Memory (CIM) implemented with Resistive Random-Access Memory (RRAM) crossbars is a promising approach for Deep Neural Network (DNN) acceleration. As the …
In the 1800s, Charles Babbage envisioned computers as analog devices. However, it was not until 150 years later that a mechanical analog computer was constructed for the US …
Resistive Random-Access-Memory (ReRAM) crossbar is one of the most promising neural network accelerators, thanks to its in-memory and in-situ analog computing abilities for …
With the rapid progress of deep neural network (DNN) applications on memristive platforms, there has been a growing interest in the acceleration and compression of memristive …