Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE circuits and systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …

DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training

X Peng, S Huang, H Jiang, A Lu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
DNN+NeuroSim is an integrated framework to benchmark compute-in-memory (CIM)
accelerators for deep neural networks, with hierarchical design options from device-level, to …

Review of neuromorphic computing based on NAND flash memory

ST Lee, JH Lee - Nanoscale Horizons, 2024 - pubs.rsc.org
The proliferation of data has facilitated global accessibility, which demands escalating
amounts of power for data storage and processing purposes. In recent years, there has been …

Two-way transpose multibit 6T SRAM computing-in-memory macro for inference-training AI edge chips

JW Su, X Si, YC Chou, TW Chang… - IEEE Journal of Solid …, 2021 - ieeexplore.ieee.org
Computing-in-memory (CIM) based on SRAM is a promising approach to achieving energy-
efficient multiply-and-accumulate (MAC) operations in artificial intelligence (AI) edge …

ARBiS: A hardware-efficient SRAM CIM CNN accelerator with cyclic-shift weight duplication and parasitic-capacitance charge sharing for AI edge application

C Zhao, J Fang, J Jiang, X Xue… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Computing-in-memory (CIM) relieves the von Neumann bottleneck by storing the weights of
neural networks in memory arrays. However, two challenges still exist, hindering the efficient …

YOLoC: deploy large-scale neural network by ROM-based computing-in-memory using residual branch on a chip

Y Chen, G Yin, Z Tan, M Lee, Z Yang, Y Liu… - Proceedings of the 59th …, 2022 - dl.acm.org
Computing-in-memory (CiM) is a promising technique to achieve high energy efficiency in
data-intensive matrix-vector multiplication (MVM) by relieving the memory bottleneck …

A charge-sharing based 8T SRAM in-memory computing for edge DNN acceleration

K Lee, S Cheon, J Jo, W Choi… - 2021 58th ACM/IEEE …, 2021 - ieeexplore.ieee.org
This paper presents a charge-sharing based customized 8T SRAM in-memory computing
(IMC) architecture. In the proposed IMC approach, the multiply-accumulate (MAC) operation …

Training Neural Networks With In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs

C Grimm, J Lee, N Verma - … on Circuits and Systems I: Regular …, 2024 - ieeexplore.ieee.org
Training Deep Neural Networks (DNNs) requires a large number of operations, among
which matrix-vector multiplies (MVMs), often of high dimensionality, dominate. In-Memory …

Novel method enabling forward and backward propagations in NAND flash memory for on-chip learning

ST Lee, G Yeom, H Yoo, HS Kim, S Lim… - … on Electron Devices, 2021 - ieeexplore.ieee.org
In this work, a novel synaptic array architecture enabling forward propagation (FP) and
backward propagation (BP) in the NAND flash memory is proposed for the first time for on …

MARS: Multimacro architecture SRAM CIM-based accelerator with co-designed compressed neural networks

SH Sie, JL Lee, YR Chen, ZW Yeh, Z Li… - … on Computer-Aided …, 2021 - ieeexplore.ieee.org
Convolutional neural networks (CNNs) play a key role in deep learning applications.
However, the large storage overheads and the substantial computational cost of CNNs are …