Compute-in-memory chips for deep learning: Recent trends and prospects

S Yu, H Jiang, S Huang, X Peng… - IEEE circuits and systems …, 2021 - ieeexplore.ieee.org
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall
problem in hardware accelerator design for deep learning. The input vector and weight …

Efficient acceleration of deep learning inference on resource-constrained edge devices: A review

MMH Shuvo, SK Islam, J Cheng… - Proceedings of the …, 2022 - ieeexplore.ieee.org
Successful integration of deep neural networks (DNNs) or deep learning (DL) has resulted
in breakthroughs in many areas. However, deploying these highly accurate models for data …

Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices

YH Chen, TJ Yang, J Emer… - IEEE Journal on Emerging …, 2019 - ieeexplore.ieee.org
A recent trend in deep neural network (DNN) development is to extend the reach of deep
learning applications to platforms that are more resource- and energy-constrained, e.g., …

XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks

S Yin, Z Jiang, JS Seo, M Seok - IEEE Journal of Solid-State …, 2020 - ieeexplore.ieee.org
We present XNOR-SRAM, a mixed-signal in-memory computing (IMC) SRAM macro that
computes ternary-XNOR-and-accumulate (XAC) operations in binary/ternary deep neural …
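The XNOR-and-accumulate (XAC) primitive named in this snippet can be modeled in a few lines. The sketch below is a hypothetical software model, not the paper's mixed-signal circuit: with inputs and weights restricted to {-1, 0, +1}, XNOR-accumulate reduces to an ordinary elementwise product summed into a scalar (matching nonzero signs contribute +1, opposite signs -1, a zero operand contributes 0).

```python
def xac(inputs, weights):
    """Ternary XNOR-and-accumulate (XAC), modeled as a plain dot product.

    Hypothetical software sketch of the operation XNOR-SRAM evaluates
    along a bitline; operands are assumed to be in {-1, 0, +1}.
    """
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))

# In the binary (+1/-1) case this counts sign matches minus mismatches:
# products are +1, -1, -1, 0, so the accumulated result is -1.
print(xac([+1, -1, +1, 0], [+1, +1, -1, -1]))  # -> -1
```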

Neuro-inspired computing with emerging nonvolatile memorys

S Yu - Proceedings of the IEEE, 2018 - ieeexplore.ieee.org
This comprehensive review summarizes the state of the art, challenges, and prospects of
neuro-inspired computing with emerging nonvolatile memory devices. First, we discuss the …

[BOOK][B] Efficient processing of deep neural networks

V Sze, YH Chen, TJ Yang, JS Emer - 2020 - Springer
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …

CONV-SRAM: An energy-efficient SRAM with in-memory dot-product computation for low-power convolutional neural networks

A Biswas, AP Chandrakasan - IEEE Journal of Solid-State …, 2018 - ieeexplore.ieee.org
This paper presents an energy-efficient static random access memory (SRAM) with
embedded dot-product computation capability, for binary-weight convolutional neural …

C3SRAM: An in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism

Z Jiang, S Yin, JS Seo, M Seok - IEEE Journal of Solid-State …, 2020 - ieeexplore.ieee.org
This article presents C3SRAM, an in-memory-computing SRAM macro. The macro is an
SRAM module with the circuits embedded in bitcells and peripherals to perform hardware …

UNPU: An energy-efficient deep neural network accelerator with fully variable weight bit precision

J Lee, C Kim, S Kang, D Shin, S Kim… - IEEE Journal of Solid …, 2018 - ieeexplore.ieee.org
An energy-efficient deep neural network (DNN) accelerator, unified neural processing unit
(UNPU), is proposed for mobile deep learning applications. The UNPU can support both …

In‐Memory Vector‐Matrix Multiplication in Monolithic Complementary Metal–Oxide–Semiconductor‐Memristor Integrated Circuits: Design Choices, Challenges, and …

A Amirsoleimani, F Alibart, V Yon, J Xu… - Advanced Intelligent …, 2020 - Wiley Online Library
The low communication bandwidth between memory and processing units in conventional
von Neumann machines does not support the requirements of emerging applications that …
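The in-memory vector-matrix multiplication these entries build on has a simple idealized model: in a memristor crossbar, applying voltages to the rows produces column currents that, by Ohm's law and Kirchhoff's current law, equal the input vector multiplied by the stored conductance matrix. The sketch below (assumed names, not from any of the listed articles) models that ideal case, ignoring wire resistance and device nonidealities.

```python
def crossbar_vmm(G, v):
    """Ideal memristor crossbar: column currents I_j = sum_i G[i][j] * v[i].

    Hypothetical model of in-memory vector-matrix multiplication:
    G is the stored conductance matrix (rows x cols), v the row voltages;
    the analog array computes the full product in one read step.
    """
    rows, cols = len(G), len(G[0])
    assert len(v) == rows
    return [sum(G[i][j] * v[i] for i in range(rows)) for j in range(cols)]

G = [[0.5, 1.0],
     [1.5, 0.2]]   # conductances (arbitrary units)
v = [1.0, 2.0]     # input voltages
# Column 0: 0.5*1.0 + 1.5*2.0 = 3.5; column 1: 1.0*1.0 + 0.2*2.0 = 1.4
print(crossbar_vmm(G, v))  # -> [3.5, 1.4]
```

The point of the crossbar formulation is that the O(rows x cols) multiply-accumulate work happens in place where the weights are stored, which is what removes the memory-bandwidth bottleneck described in the snippet.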