In-memory computing with emerging memory devices: Status and outlook

P Mannocci, M Farronato, N Lepri, L Cattaneo… - APL Machine …, 2023 - pubs.aip.org
In-memory computing (IMC) has emerged as a new computing paradigm able to alleviate or
suppress the memory bottleneck, which is the major concern for energy efficiency and …

An overview of efficient interconnection networks for deep neural network accelerators

SM Nabavinejad, M Baharloo, KC Chen… - IEEE Journal on …, 2020 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) have shown significant advantages in many domains, such
as pattern recognition, prediction, and control optimization. The edge computing demand in …

MNSIM 2.0: A behavior-level modeling tool for memristor-based neuromorphic computing systems

Z Zhu, H Sun, K Qiu, L Xia, G Krishnan, G Dai… - Proceedings of the …, 2020 - dl.acm.org
Memristor-based neuromorphic computing systems give alternative solutions to boost the
computing energy efficiency of Neural Network (NN) algorithms. Because of the large-scale …

SIAM: Chiplet-based scalable in-memory acceleration with mesh for deep neural networks

G Krishnan, SK Mandal, M Pannala… - ACM Transactions on …, 2021 - dl.acm.org
In-memory computing (IMC) on a monolithic chip for deep learning faces dramatic
challenges on area, yield, and on-chip interconnection cost due to the ever-increasing …

FLASH: Fast Neural Architecture Search with Hardware Optimization

G Li, SK Mandal, UY Ogras, R Marculescu - ACM Transactions on …, 2021 - dl.acm.org
Neural architecture search (NAS) is a promising technique to design efficient and high-
performance deep neural networks (DNNs). As the performance requirements of ML …

MNSIM 2.0: A behavior-level modeling tool for processing-in-memory architectures

Z Zhu, H Sun, T Xie, Y Zhu, G Dai, L Xia… - … on Computer-Aided …, 2023 - ieeexplore.ieee.org
In the age of artificial intelligence (AI), the huge data movements between memory and
computing units become the bottleneck of von Neumann architectures, i.e., the "memory wall" …

A latency-optimized reconfigurable NoC for in-memory acceleration of DNNs

SK Mandal, G Krishnan, C Chakrabarti… - IEEE Journal on …, 2020 - ieeexplore.ieee.org
In-memory computing reduces latency and energy consumption of Deep Neural Networks
(DNNs) by reducing the number of off-chip memory accesses. However, crossbar-based in …

Impact of on-chip interconnect on in-memory acceleration of deep neural networks

G Krishnan, SK Mandal, C Chakrabarti, JS Seo… - ACM Journal on …, 2021 - dl.acm.org
With the widespread use of Deep Neural Networks (DNNs), machine learning algorithms
have evolved in two diverse directions—one with ever-increasing connection density for …

COIN: Communication-aware in-memory acceleration for graph convolutional networks

SK Mandal, G Krishnan, AA Goksoy… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Graph convolutional networks (GCNs) have shown remarkable learning capabilities when
processing graph-structured data found inherently in many application areas. GCNs …

Hybrid RRAM/SRAM in-memory computing for robust DNN acceleration

G Krishnan, Z Wang, I Yeo, L Yang… - … on Computer-Aided …, 2022 - ieeexplore.ieee.org
RRAM-based in-memory computing (IMC) effectively accelerates deep neural networks
(DNNs) and other machine learning algorithms. On the other hand, in the presence of RRAM …