Memory devices and applications for in-memory computing

A Sebastian, M Le Gallo, R Khaddam-Aljameh… - Nature …, 2020 - nature.com
Traditional von Neumann computing systems involve separate processing and memory
units. However, data movement is costly in terms of time and energy, and this problem is …

Model compression and hardware acceleration for neural networks: A comprehensive survey

L Deng, G Li, S Han, L Shi, Y Xie - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
Domain-specific hardware is becoming a promising topic against the backdrop of the
slowdown in improvement of general-purpose processors due to the foreseeable end of Moore's Law …

Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices

YH Chen, TJ Yang, J Emer… - IEEE Journal on Emerging …, 2019 - ieeexplore.ieee.org
A recent trend in deep neural network (DNN) development is to extend the reach of deep
learning applications to platforms that are more resource- and energy-constrained, e.g., …

XNOR-SRAM: In-memory computing SRAM macro for binary/ternary deep neural networks

S Yin, Z Jiang, JS Seo, M Seok - IEEE Journal of Solid-State …, 2020 - ieeexplore.ieee.org
We present XNOR-SRAM, a mixed-signal in-memory computing (IMC) SRAM macro that
computes ternary-XNOR-and-accumulate (XAC) operations in binary/ternary deep neural …
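
As background for the snippet above (a minimal illustrative sketch, not the paper's mixed-signal circuit): when activations and weights are restricted to ±1, a dot product reduces to a bit-wise XNOR followed by a population count, which is what an XNOR-and-accumulate (XAC) operation evaluates. The vector length and variable names below are made up for illustration.

```python
import numpy as np

# Sketch of the XNOR-and-accumulate identity for binary networks:
# for x, w in {-1, +1}^N with {0, 1} encodings xb, wb,
#   x . w = 2 * popcount(XNOR(xb, wb)) - N
rng = np.random.default_rng(0)
N = 64
x = rng.choice([-1, 1], size=N)   # binary activations
w = rng.choice([-1, 1], size=N)   # binary weights

xb = (x > 0).astype(np.uint8)     # map -1 -> 0, +1 -> 1
wb = (w > 0).astype(np.uint8)

xnor = 1 - (xb ^ wb)              # 1 where bits agree, 0 where they differ
xac = 2 * int(xnor.sum()) - N     # XNOR-and-accumulate result

assert xac == int(np.dot(x, w))   # matches the signed dot product
```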

Hardware implementation of memristor-based artificial neural networks

F Aguirre, A Sebastian, M Le Gallo, W Song… - Nature …, 2024 - nature.com
Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL)
techniques, which rely on networks of connected simple computing units operating in …

Hardware and software optimizations for accelerating deep neural networks: Survey of current trends, challenges, and the road ahead

M Capra, B Bussolino, A Marchisio, G Masera… - IEEE …, 2020 - ieeexplore.ieee.org
Currently, Machine Learning (ML) is becoming ubiquitous in everyday life. Deep Learning
(DL) is already present in many applications ranging from computer vision for medicine to …

[Book] Efficient processing of deep neural networks

V Sze, YH Chen, TJ Yang, JS Emer - 2020 - Springer
This book provides a structured treatment of the key principles and techniques for enabling
efficient processing of deep neural networks (DNNs). DNNs are currently widely used for …

Analog architectures for neural network acceleration based on non-volatile memory

TP Xiao, CH Bennett, B Feinberg, S Agarwal… - Applied Physics …, 2020 - pubs.aip.org
Analog hardware accelerators, which perform computation within a dense memory array,
have the potential to overcome the major bottlenecks faced by digital hardware for data …
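
To make the compute-in-memory idea concrete (a behavioral sketch under assumed quantization and noise parameters, not a model of any specific accelerator surveyed in the paper): storing weights as conductances in a crossbar lets a matrix-vector multiplication happen in one step, since by Ohm's law and Kirchhoff's current law each column current is the dot product of that column's conductances with the applied row voltages.

```python
import numpy as np

# Behavioral sketch of analog matrix-vector multiplication in a crossbar.
# Weights are stored as (quantized, noisy) conductances, inputs are applied
# as row voltages, and each column current accumulates the dot product.
# The 16-level quantization and 1% noise are illustrative assumptions.
rng = np.random.default_rng(1)

rows, cols = 128, 64
W = rng.normal(size=(rows, cols))   # target weight matrix
v = rng.normal(size=rows)           # input activations applied as voltages

levels = 16
g_max = np.abs(W).max()
G = np.round(W / g_max * (levels - 1)) / (levels - 1) * g_max   # quantized conductances
G = G + rng.normal(scale=0.01 * g_max, size=G.shape)            # device-to-device noise

i_out = G.T @ v                     # column currents = analog MVM result
ref = W.T @ v                       # ideal digital result
print("relative error:", np.linalg.norm(i_out - ref) / np.linalg.norm(ref))
```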

A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute

H Valavi, PJ Ramadge, E Nestler… - IEEE Journal of Solid …, 2019 - ieeexplore.ieee.org
Large-scale matrix-vector multiplications, which dominate in deep neural networks (DNNs),
are limited by data movement in modern VLSI technologies. This paper addresses data …
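
For context on the "charge-domain compute" in the title (an illustrative behavioral sketch, not the published circuit): one way to accumulate many 1-bit multiplication results with little data movement is to store each product as charge on a unit capacitor and then short all the capacitors together, so the shared voltage is proportional to the sum. The vector length and supply voltage below are arbitrary assumptions.

```python
import numpy as np

# Behavioral sketch of charge-domain accumulation: each bit-wise product
# either charges a unit capacitor to V_DD or leaves it at 0 V; shorting
# equal capacitors together averages their voltages, giving a value
# proportional to the accumulated sum.
rng = np.random.default_rng(2)

N = 256
V_DD = 1.0
x = rng.integers(0, 2, size=N)   # binary activations in {0, 1}
w = rng.integers(0, 2, size=N)   # binary weights in {0, 1}

products = x & w                 # 1-bit multiplications (AND for {0, 1} operands)
cap_voltages = products * V_DD   # each product stored on a unit capacitor
v_shared = cap_voltages.mean()   # charge sharing across equal capacitors

assert np.isclose(v_shared, products.sum() * V_DD / N)
```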

15.1 A programmable neural-network inference accelerator based on scalable in-memory computing

H Jia, M Ozatay, Y Tang, H Valavi… - … Solid-State Circuits …, 2021 - ieeexplore.ieee.org
This paper presents a scalable neural-network (NN) inference accelerator in 16nm, based
on an array of programmable cores employing mixed-signal In-Memory Computing (IMC) …