Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE Circuits and Systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …
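
A minimal sketch of the in-memory matrix-vector multiplication these snippets refer to, assuming weights are mapped to device conductances G, inputs are applied as read voltages V, and per-column currents are digitized by low-resolution ADCs; the array sizes, voltage range, and the adc helper below are illustrative, not taken from the paper:

    import numpy as np

    # Illustrative crossbar model: column currents implement I = G^T @ V.
    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-4, size=(128, 64))  # device conductances (rows x columns), in siemens
    V = rng.uniform(0.0, 0.2, size=128)          # input vector applied as word-line read voltages

    I = G.T @ V                                  # Kirchhoff current summation on each bit line

    def adc(currents, bits=5):
        # Per-column ADC: quantize the analog partial sums to a few bits.
        lo, hi = currents.min(), currents.max()
        return np.round((currents - lo) / (hi - lo + 1e-12) * (2**bits - 1)).astype(int)

    print(adc(I)[:8])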

RRAM for compute-in-memory: From inference to training

S Yu, W Shim, X Peng, Y Luo - IEEE Transactions on Circuits …, 2021 - ieeexplore.ieee.org
To efficiently deploy machine learning applications to the edge, compute-in-memory (CIM)
based hardware accelerators are a promising solution with improved throughput and energy …

DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training

X Peng, S Huang, H Jiang, A Lu… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
DNN+NeuroSim is an integrated framework to benchmark compute-in-memory (CIM)
accelerators for deep neural networks, with hierarchical design options from device-level, to …
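
To illustrate what hierarchical design options mean in such benchmarking frameworks, here is a generic roll-up from assumed device-level energy numbers to a layer-level estimate; it is not NeuroSim's API, and every constant (energy per MAC, ADC energy, tile size) is an assumption for the sketch:

    # Generic hierarchical energy roll-up (not NeuroSim's API): device-level
    # numbers -> array/tile level -> per-layer estimate.
    ENERGY_PER_MAC_PJ = 0.05      # assumed analog MAC energy (pJ)
    ADC_ENERGY_PJ = 2.0           # assumed energy per ADC conversion (pJ)
    TILE_ROWS = 128               # assumed crossbar tile height

    def conv_layer_energy_pj(in_ch, out_ch, k, ofm_h, ofm_w):
        macs = in_ch * out_ch * k * k * ofm_h * ofm_w
        tiles_per_column = -(-(in_ch * k * k) // TILE_ROWS)   # ceiling division
        adc_reads = out_ch * tiles_per_column * ofm_h * ofm_w
        return macs * ENERGY_PER_MAC_PJ + adc_reads * ADC_ENERGY_PJ

    print(conv_layer_energy_pj(64, 64, 3, 32, 32) / 1e6, "uJ")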

Impact of asymmetric weight update on neural network training with tiki-taka algorithm

C Lee, K Noh, W Ji, T Gokmen, S Kim - Frontiers in neuroscience, 2022 - frontiersin.org
Recent progress in novel non-volatile memory-based synaptic device technologies and their
feasibility for matrix-vector multiplication (MVM) have ignited active research on implementing …
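
A toy model of the asymmetric update the title refers to, assuming a soft-bounds device where potentiation and depression pulses have unequal step sizes; the constants are placeholders, and the point is only that nominally zero-sum pulse trains drift the weight toward a device-dependent symmetry point, the nonideality that Tiki-Taka-style training is meant to tolerate:

    # Toy soft-bounds device model with asymmetric potentiation/depression (assumed values).
    W_MAX, W_MIN = 1.0, -1.0
    ALPHA_UP, ALPHA_DOWN = 0.02, 0.04            # unequal step sizes = asymmetry

    def pulse(w, direction):
        if direction > 0:
            return w + ALPHA_UP * (W_MAX - w)    # potentiation, saturates at W_MAX
        return w + ALPHA_DOWN * (W_MIN - w)      # depression, saturates at W_MIN

    w = 0.8
    for _ in range(200):                         # alternating +/- pulses: ideally no net change
        w = pulse(pulse(w, +1), -1)
    print(f"weight drifted to {w:.3f} despite zero net ideal update")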

[BOOK][B] Semiconductor Memory Devices and Circuits

S Yu - 2022 - taylorfrancis.com
This book covers semiconductor memory technologies from device bit-cell structures to
memory array design with an emphasis on recent industry scaling trends and cutting-edge …

Experimental measurement of ungated channel region conductance in a multi-terminal, metal oxide-based ECRAM

H Kwak, C Lee, C Lee, K Noh… - … Science and Technology, 2021 - iopscience.iop.org
Due to the rapid progress of artificial intelligence technology based on neural networks, the
amount of required computation has been increasing dramatically. To keep up with the ever …

Low-cost 7T-SRAM compute-in-memory design based on bit-line charge-sharing based analog-to-digital conversion

K Lee, J Kim, J Park - Proceedings of the 41st IEEE/ACM International …, 2022 - dl.acm.org
Although compute-in-memory (CIM) is considered one of the promising solutions to
overcome the memory-wall problem, the variations in analog voltage computation and analog-to …
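
A toy model of the charge-sharing idea, assuming each bit line holds a voltage proportional to its local 1-bit product and that shorting equal bit-line capacitances averages those voltages; the array size, supply voltage, and 4-bit quantizer are assumptions, not the paper's circuit parameters:

    import numpy as np

    VDD = 0.9
    rng = np.random.default_rng(1)
    products = rng.integers(0, 2, size=64)    # 1-bit input x 1-bit weight partial products
    bitline_v = products * VDD                # per-bit-line voltage after local compute

    v_shared = bitline_v.mean()               # charge sharing across equal capacitances

    # A coarse quantizer recovers the accumulated count from the shared voltage.
    levels = np.linspace(0.0, VDD, 2**4)
    code = int(np.argmin(np.abs(levels - v_shared)))
    print(f"true sum={products.sum()}, shared V={v_shared:.3f} V, 4-bit code={code}")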

Quantization-aware in-situ training for reliable and accurate edge AI

JPC de Lima, L Carro - 2022 Design, Automation & Test in …, 2022 - ieeexplore.ieee.org
In-memory analog computation based on memristor crossbars has become the most
promising approach for DNN inference. Because compute and memory requirements are …
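
A minimal sketch of quantization-aware training in that spirit, assuming a fake-quantization step in the forward pass and a straight-through gradient onto a full-precision weight copy; the bit width, layer sizes, and learning rate are illustrative, not the paper's settings:

    import numpy as np

    def fake_quantize(w, bits=4):
        # Snap weights to the discrete levels the analog array could store.
        scale = np.max(np.abs(w)) / (2**(bits - 1) - 1) + 1e-12
        return np.round(w / scale) * scale

    rng = np.random.default_rng(0)
    w_fp = rng.normal(0, 0.5, size=(8, 4))   # full-precision shadow weights
    x = rng.normal(size=(16, 8))
    target = rng.normal(size=(16, 4))

    for _ in range(100):
        w_q = fake_quantize(w_fp)            # forward pass sees quantized weights
        y = x @ w_q
        grad = x.T @ (y - target) / len(x)   # gradient passed straight through to w_fp
        w_fp -= 0.01 * grad

    print("distinct stored levels:", np.unique(np.round(fake_quantize(w_fp), 4)).size)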

A Low-Cost Training Method of ReRAM Inference Accelerator Chips for Binarized Neural Networks to Recover Accuracy Degradation due to Statistical Variabilities

Z Chen, T Ohsawa - IEICE Transactions on Electronics, 2022 - search.ieice.org
A new software-based in-situ training (SBIST) method to achieve high accuracy is
proposed for binarized neural network inference accelerator chips in which measured …
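
An illustrative sketch, under assumptions, of the general idea of software-based training that folds a chip's measured statistics back into a binarized layer: a per-cell variation term (here a random stand-in for measured data) perturbs the +/-1 weights in every forward pass so retraining learns to tolerate it; none of the names or values come from the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    MEASURED_SIGMA = 0.08                     # stand-in for measured per-cell variation
    w_fp = rng.normal(0, 1, size=(8, 4))      # full-precision shadow weights

    def binarize_with_variation(w):
        # +/-1 weights on the array, perturbed by per-cell variation.
        return np.sign(w) * (1 + rng.normal(0, MEASURED_SIGMA, size=w.shape))

    x = rng.normal(size=(32, 8))
    target = np.sign(rng.normal(size=(32, 4)))
    for _ in range(200):
        y = np.tanh(x @ binarize_with_variation(w_fp))
        grad = x.T @ ((y - target) * (1 - y**2)) / len(x)
        w_fp -= 0.05 * grad                   # straight-through update of shadow weights
    print("final loss:", float(((np.tanh(x @ np.sign(w_fp)) - target) ** 2).mean()))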

Compute-in-Memory Architecture

H Jiang, S Huang, S Yu - Handbook of Computer Architecture, 2023 - Springer
In the era of big data and artificial intelligence, hardware advancement in throughput and
energy efficiency is essential for both cloud and edge computations. Because of the merged …