An overview of processing-in-memory circuits for artificial intelligence and machine learning

D Kim, C Yu, S Xie, Y Chen, JY Kim… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Artificial intelligence (AI) and machine learning (ML) are revolutionizing many fields of study,
such as visual recognition, natural language processing, autonomous vehicles, and …

Trending IC design directions in 2022

CH Chan, L Cheng, W Deng, P Feng… - Journal of …, 2022 - iopscience.iop.org
Driven by the non-stop demand for a better and smarter society, the number of electronic devices
keeps increasing exponentially, and the computation power, communication data rate, smart …

A 28nm 29.2 TFLOPS/W BF16 and 36.5 TOPS/W INT8 reconfigurable digital CIM processor with unified FP/INT pipeline and bitwise in-memory booth multiplication for …

F Tu, Y Wang, Z Wu, L Liang, Y Ding… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Many computing-in-memory (CIM) processors have been proposed for edge deep learning
(DL) acceleration. They usually rely on analog CIM techniques to achieve high-efficiency NN …
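The title above mentions bitwise in-memory Booth multiplication. As a minimal software sketch of the underlying idea (not the paper's circuit), radix-2 Booth recoding examines adjacent multiplier bits and adds, subtracts, or skips the multiplicand per bit position; function name and bit width here are illustrative:

```python
def booth_multiply(a: int, b: int, bits: int = 8) -> int:
    """Multiply signed integers via radix-2 Booth recoding.

    For each bit position, the pair (b_i, b_{i-1}) selects
    +a, -a, or no-op, skipping runs of identical bits --
    the recoding bitwise in-memory Booth multipliers exploit.
    b must fit in `bits`-bit two's complement.
    """
    product = 0
    prev_bit = 0
    for i in range(bits):
        cur_bit = (b >> i) & 1
        if (cur_bit, prev_bit) == (0, 1):    # end of a run of 1s: add
            product += a << i
        elif (cur_bit, prev_bit) == (1, 0):  # start of a run of 1s: subtract
            product -= a << i
        prev_bit = cur_bit
    return product
```

For example, `booth_multiply(23, -45)` recovers the ordinary signed product `23 * -45`.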

Diana: An end-to-end hybrid digital and analog neural network soc for the edge

P Houshmand, GM Sarda, V Jain… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
DIgital-ANAlog (DIANA), a heterogeneous multi-core accelerator, combines a reduced
instruction set computer-five (RISC-V) host processor with an analog in-memory computing …

DIANA: An end-to-end energy-efficient digital and ANAlog hybrid neural network SoC

K Ueyoshi, IA Papistas, P Houshmand… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Energy-efficient matrix-vector multiplications (MVMs) are key to bringing neural network
(NN) inference to edge devices. This has led to a wide range of state-of-the-art MVM …
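The matrix-vector multiplications (MVMs) named in this abstract are often evaluated bit-serially in SRAM CIM macros: one input bit is broadcast per cycle and partial sums are combined with shifts. A minimal sketch under that assumption (function name, operand widths, and unsigned-input restriction are illustrative):

```python
def bit_serial_mvm(weights, inputs, in_bits=8):
    """Bit-serial MVM: broadcast one input bit per cycle to all
    weight rows, then shift-and-accumulate the per-bit dot
    products, mimicking how many CIM macros schedule MACs.
    `inputs` are unsigned ints of at most `in_bits` bits.
    """
    acc = [0] * len(weights)
    for b in range(in_bits):                      # one cycle per input bit
        bit_vec = [(x >> b) & 1 for x in inputs]  # broadcast bit b
        for r, row in enumerate(weights):
            partial = sum(w * xb for w, xb in zip(row, bit_vec))
            acc[r] += partial << b                # weight partial sum by 2^b
    return acc
```

The result matches a direct MVM; the bit-serial schedule simply trades cycles for much simpler per-column arithmetic.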

A 28nm 15.59 µJ/token full-digital bitline-transpose CIM-based sparse transformer accelerator with pipeline/parallel reconfigurable modes

F Tu, Z Wu, Y Wang, L Liang, L Liu… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Transformer models have achieved state-of-the-art results in many fields, like natural
language processing and computer vision, but their large number of matrix multiplications …

TD-SRAM: Time-domain-based in-memory computing macro for binary neural networks

J Song, Y Wang, M Guo, X Ji, K Cheng… - … on Circuits and …, 2021 - ieeexplore.ieee.org
In-Memory Computing (IMC), which takes advantage of analog multiplication-accumulation
(MAC) inside memory, is promising to alleviate the Von-Neumann bottleneck and improve …
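For binary neural networks like those this macro targets, the in-memory MAC reduces to XNOR plus popcount over {-1, +1} operands packed as bits. A minimal sketch of that reduction (the encoding 1 → +1, 0 → -1 and the function name are assumptions for illustration, not details from the paper):

```python
def bnn_mac(w_bits: int, x_bits: int, n: int) -> int:
    """Binary-NN MAC on n packed {-1,+1} values (bit 1 -> +1,
    bit 0 -> -1): XNOR marks agreeing positions, popcount
    tallies them, and a linear remap yields the +/-1 dot
    product that IMC macros evaluate inside the array.
    """
    xnor = ~(w_bits ^ x_bits) & ((1 << n) - 1)  # 1 where bits agree
    matches = bin(xnor).count("1")              # popcount
    return 2 * matches - n                      # agree:+1, disagree:-1
```

For example, two identical 4-bit vectors give the maximum dot product of 4, and complementary vectors give -4.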

An 8-b-precision 6T SRAM computing-in-memory macro using segmented-bitline charge-sharing scheme for AI edge chips

JW Su, YC Chou, R Liu, TW Liu, PJ Lu… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
Advances in static random access memory (SRAM)-CIM devices are meant to increase
capacity while improving energy efficiency (EF) and reducing computing latency. This …

A heterogeneous in-memory computing cluster for flexible end-to-end inference of real-world deep neural networks

A Garofalo, G Ottavi, F Conti… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Deployment of modern TinyML tasks on small battery-constrained IoT devices requires high
computational energy efficiency. Analog In-Memory Computing (IMC) using non-volatile …

ReDCIM: Reconfigurable digital computing-in-memory processor with unified FP/INT pipeline for cloud AI acceleration

F Tu, Y Wang, Z Wu, L Liang, Y Ding… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
Cloud AI acceleration has drawn great attention in recent years, as big models are
becoming a popular trend in deep learning. Cloud AI runs high-efficiency inference, high …