Deep physical neural networks trained with backpropagation

LG Wright, T Onodera, MM Stein, T Wang… - Nature, 2022 - nature.com
Deep-learning models have become pervasive tools in science and engineering. However,
their energy requirements now increasingly limit their scalability. Deep-learning …

An analog-AI chip for energy-efficient speech recognition and transcription

S Ambrogio, P Narayanan, A Okazaki, A Fasoli… - Nature, 2023 - nature.com
Models of artificial intelligence (AI) that have billions of parameters can achieve
high accuracy across a range of tasks, but they exacerbate the poor energy efficiency of …

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

MJ Rasch, C Mackin, M Le Gallo, A Chen… - Nature …, 2023 - nature.com
Analog in-memory computing—a promising approach for energy-efficient acceleration of
deep learning workloads—computes matrix-vector multiplications but only approximately …
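The approximate matrix-vector multiplication mentioned in this snippet can be illustrated with a minimal additive-noise model. This is a simplifying assumption for illustration only: real analog devices exhibit programming error, read noise, and drift with more complex statistics, and the `noise_std` value here is arbitrary rather than taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_mvm(W, x, noise_std=0.02):
    """Sketch of an analog in-memory matrix-vector multiply.

    Hypothetical model: each conductance-encoded weight is perturbed by
    additive Gaussian noise before the multiply, so the result is only
    approximately equal to the exact W @ x.
    """
    W_noisy = W + rng.normal(0.0, noise_std, size=W.shape)
    return W_noisy @ x

W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)

exact = W @ x
approx = analog_mvm(W, x)
# Relative error is small but non-zero, reflecting device imprecision.
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error: {rel_err:.4f}")
```

Hardware-aware training (as in the entry above) injects this kind of perturbation during training so that the network learns weights robust to it at inference time.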

Deployment of artificial intelligence models on edge devices: A tutorial brief

M Żyliński, A Nassibi, I Rakhmatulin… - … on Circuits and …, 2023 - ieeexplore.ieee.org
Artificial intelligence (AI) on an edge device has enormous potential, including advanced
signal filtering, event detection, optimization in communications and data compression …

Fully on-chip MAC at 14 nm enabled by accurate row-wise programming of PCM-based weights and parallel vector-transport in duration-format

P Narayanan, S Ambrogio, A Okazaki… - … on Electron Devices, 2021 - ieeexplore.ieee.org
Hardware acceleration of deep learning using analog non-volatile memory (NVM) requires
large arrays with high device yield, high accuracy Multiply-ACcumulate (MAC) operations …

RAELLA: Reforming the arithmetic for efficient, low-resolution, and low-loss analog PIM: No retraining required!

T Andrulis, JS Emer, V Sze - … of the 50th Annual International Symposium …, 2023 - dl.acm.org
Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural
Network (DNN) inference by reducing costly data movement and by using resistive RAM …

Optimised weight programming for analogue memory-based deep neural networks

C Mackin, MJ Rasch, A Chen, J Timcheck… - Nature …, 2022 - nature.com
Analogue memory-based deep neural networks provide energy-efficiency and per-area
throughput gains relative to state-of-the-art digital counterparts such as graphics processing …

A heterogeneous and programmable compute-in-memory accelerator architecture for analog-AI using dense 2-D mesh

S Jain, H Tsai, CT Chen, R Muralidhar… - … Transactions on Very …, 2022 - ieeexplore.ieee.org
We introduce a highly heterogeneous and programmable compute-in-memory (CIM)
accelerator architecture for deep neural network (DNN) inference. This architecture …

Using the IBM analog in-memory hardware acceleration kit for neural network training and inference

M Le Gallo, C Lammie, J Büchel, F Carta… - APL Machine …, 2023 - pubs.aip.org
Analog In-Memory Computing (AIMC) is a promising approach to reduce the
latency and energy consumption of Deep Neural Network (DNN) inference and training …

Alpine: Analog in-memory acceleration with tight processor integration for deep learning

J Klein, I Boybat, YM Qureshi, M Dazzi… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
Analog in-memory computing (AIMC) cores offer significant performance and energy
benefits for neural network inference with respect to digital logic (e.g., CPUs). AIMCs …