Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE circuits and systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …

A full spectrum of computing-in-memory technologies

Z Sun, S Kvatinsky, X Si, A Mehonic, Y Cai… - Nature Electronics, 2023 - nature.com
Computing in memory (CIM) could be used to overcome the von Neumann bottleneck and to
provide sustainable improvements in computing throughput and energy efficiency …

[HTML] An analog-AI chip for energy-efficient speech recognition and transcription

S Ambrogio, P Narayanan, A Okazaki, A Fasoli… - Nature, 2023 - nature.com
Abstract Models of artificial intelligence (AI) that have billions of parameters can achieve
high accuracy across a range of tasks, but they exacerbate the poor energy efficiency of …

16.4 An 89TOPS/W and 16.3TOPS/mm2 All-Digital SRAM-Based Full-Precision Compute-In Memory Macro in 22nm for Machine-Learning Edge Applications

YD Chih, PH Lee, H Fujiwara, YC Shih… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
From the cloud to edge devices, artificial intelligence (AI) and machine learning (ML) are
widely used in many cognitive tasks, such as image classification and speech recognition. In …

Challenges and trends of SRAM-based computing-in-memory for AI edge devices

CJ Jhang, CX Xue, JM Hung… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
When applied to artificial intelligence edge devices, the conventional von Neumann
computing architecture imposes numerous challenges (e.g., improving the energy efficiency) …

[BOOK][B] Efficient processing of deep neural networks

V Sze, YH Chen, TJ Yang, JS Emer - 2020 - Springer
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …

16.1 A 22nm 4Mb 8b-precision ReRAM computing-in-memory macro with 11.91 to 195.7 TOPS/W for tiny AI edge devices

CX Xue, JM Hung, HY Kao, YH Huang… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
Battery-powered tiny-AI edge devices require large-capacity nonvolatile compute-in-memory
(nvCIM), with multibit input (IN), weight (W), and output (OUT) precision to support complex …

A 5-nm 254-TOPS/W 221-TOPS/mm2 Fully-Digital Computing-in-Memory Macro Supporting Wide-Range Dynamic-Voltage-Frequency Scaling and Simultaneous …

H Fujiwara, H Mori, WC Zhao… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Computing-in-memory (CIM) is being widely explored to minimize power consumption in
data movement and multiply-and-accumulate (MAC) for edge-AI devices. Although most …

16.3 A 28nm 384kb 6T-SRAM computation-in-memory macro with 8b precision for AI edge chips

JW Su, YC Chou, R Liu, TW Liu, PJ Lu… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
Recent SRAM-based computation-in-memory (CIM) macros enable mid-to-high precision
multiply-and-accumulate (MAC) operations with improved energy efficiency using ultra …

A 28nm 1Mb time-domain computing-in-memory 6T-SRAM macro with a 6.6 ns latency, 1241GOPS and 37.01 TOPS/W for 8b-MAC operations for edge-AI devices

PC Wu, JW Su, YL Chung, LY Hong… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
SRAM-based computing in memory (SRAM-CIM) is an attractive approach to improve the
energy efficiency (EF) of edge-AI devices performing multiply-and-accumulate (MAC) …