An overview of processing-in-memory circuits for artificial intelligence and machine learning

D Kim, C Yu, S Xie, Y Chen, JY Kim… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Artificial intelligence (AI) and machine learning (ML) are revolutionizing many fields of study,
such as visual recognition, natural language processing, autonomous vehicles, and …

Architecture of computing system based on chiplet

G Shan, Y Zheng, C Xing, D Chen, G Li, Y Yang - Micromachines, 2022 - mdpi.com
Computing systems are widely used in medical diagnosis, climate prediction, autonomous
vehicles, etc. As the key part of electronics, the performance of computing systems is crucial …

A 1.041-Mb/mm² 27.38-TOPS/W Signed-INT8 Dynamic-Logic-Based ADC-less SRAM Compute-in-Memory Macro in 28nm with Reconfigurable Bitwise Operation for …

B Yan, JL Hsu, PC Yu, CC Lee, Y Zhang… - … Solid-State Circuits …, 2022 - ieeexplore.ieee.org
Advanced intelligent embedded systems perform cognitive tasks with highly efficient vector-
processing units for deep neural network (DNN) inference and other vector-based signal …

HERMES-Core—A 1.59-TOPS/mm² PCM on 14-nm CMOS In-Memory Compute Core Using 300-ps/LSB Linearized CCO-Based ADCs

R Khaddam-Aljameh, M Stanisavljevic… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
We present a 256 × 256 in-memory compute (IMC) core designed and fabricated in 14-nm
CMOS technology with backend-integrated multi-level phase change memory (PCM). It …

A 65-nm 8T SRAM compute-in-memory macro with column ADCs for processing neural networks

C Yu, T Yoo, KTC Chai, TTH Kim… - IEEE Journal of Solid …, 2022 - ieeexplore.ieee.org
In this work, we present a novel 8T static random access memory (SRAM)-based compute-in-
memory (CIM) macro for processing neural networks with high energy efficiency. The …

A charge domain SRAM compute-in-memory macro with C-2C ladder-based 8-bit MAC unit in 22-nm FinFET process for edge inference

H Wang, R Liu, R Dorrance… - IEEE Journal of Solid …, 2023 - ieeexplore.ieee.org
Compute-in-memory (CiM) is one promising solution to address the memory bottleneck
existing in traditional computing architectures. However, the tradeoff between energy …

RAELLA: Reforming the arithmetic for efficient, low-resolution, and low-loss analog PIM: No retraining required!

T Andrulis, JS Emer, V Sze - … of the 50th Annual International Symposium …, 2023 - dl.acm.org
Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural
Network (DNN) inference by reducing costly data movement and by using resistive RAM …

Scalable and programmable neural network inference accelerator based on in-memory computing

H Jia, M Ozatay, Y Tang, H Valavi… - IEEE Journal of Solid …, 2021 - ieeexplore.ieee.org
This work demonstrates a programmable in-memory-computing (IMC) inference accelerator
for scalable execution of neural network (NN) models, leveraging a high-signal-to-noise …

Proposal of analog in-memory computing with magnified tunnel magnetoresistance ratio and universal STT-MRAM cell

H Cai, Y Guo, B Liu, M Zhou, J Chen… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
In-memory computing (IMC) is an effective solution for energy-efficient artificial intelligence
applications. Analog IMC amortizes the power consumption of multiple sensing amplifiers …

HD-CIM: Hybrid-device computing-in-memory structure based on MRAM and SRAM to reduce weight loading energy of neural networks

H Zhang, J Liu, J Bai, S Li, L Luo, S Wei… - … on Circuits and …, 2022 - ieeexplore.ieee.org
SRAM based computing-in-memory (SRAM-CIM) techniques have been widely studied for
neural networks (NNs) to solve the “Von Neumann bottleneck”. However, as the scale of the …