Compute in-memory with non-volatile elements for neural networks: A review from a co-design perspective

W Haensch, A Raghunathan, K Roy… - Advanced …, 2023 - Wiley Online Library
Deep learning has become ubiquitous, touching daily lives across the globe. Today,
traditional computer architectures are stressed to their limits in efficiently executing the …

Overview and outlook of emerging non-volatile memories

M Si, HY Cheng, T Ando, G Hu, PD Ye - MRS Bulletin, 2021 - Springer
Memory technologies with higher density, higher bandwidth, lower power consumption,
higher speed, and lower cost are in high demand in the current big data era. In this paper …

Algorithm for training neural networks on resistive device arrays

T Gokmen, W Haensch - Frontiers in neuroscience, 2020 - frontiersin.org
Hardware architectures composed of resistive cross-point device arrays can provide
significant power and speed benefits for deep neural network training workloads using …
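
The speed benefit comes from performing the rank-1 SGD weight update in place on the array. A minimal NumPy sketch of the update that a crossbar realizes in a single parallel step (sizes and rates are illustrative; the paper's stochastic pulse-coincidence scheme achieves the same effect without ever forming the outer product digitally):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4                       # illustrative layer sizes
W = rng.normal(0.0, 0.1, (n_out, n_in))  # weights stored as device conductances
lr = 0.01

x = rng.normal(size=n_in)       # forward activations applied to the columns
delta = rng.normal(size=n_out)  # backpropagated errors applied to the rows

# On the array, pulses encoding x and delta coincide at each cross-point,
# so all n_out * n_in increments happen simultaneously.
W += lr * np.outer(delta, x)
```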

Model-based synthetic geoelectric sampling for magnetotelluric inversion with deep neural networks

R Li, N Yu, X Wang, Y Liu, Z Cai… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Neural networks (NNs) are efficient tools for rapidly obtaining geoelectric models to solve
magnetotelluric (MT) inversion problems. Training an NN with strong predictive power …
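
The sampling idea can be sketched as: draw earth models from a prior, push them through a forward simulator, and train the network on the resulting pairs. In the sketch below, forward_mt and all sizes are hypothetical placeholders standing in for the authors' geoelectric forward modeling, not their actual method:

```python
import numpy as np

rng = np.random.default_rng(1)
n_layers, n_freqs, n_samples = 5, 16, 1000

# Stand-in for the physics: a fixed smooth map from a layered resistivity
# model to a response curve. A real pipeline would call a 1-D
# magnetotelluric forward solver here instead.
mixing = rng.standard_normal((n_layers, n_freqs))

def forward_mt(log_rho):
    return np.tanh(log_rho @ mixing)

# Model-based synthetic sampling: (response, model) pairs become training
# data for a network that learns the inverse mapping.
models = rng.uniform(0.0, 3.0, (n_samples, n_layers))  # log10 resistivity
responses = forward_mt(models)
dataset = list(zip(responses, models))
```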

Using the IBM analog in-memory hardware acceleration kit for neural network training and inference

M Le Gallo, C Lammie, J Büchel, F Carta… - APL Machine …, 2023 - pubs.aip.org
Analog In-Memory Computing (AIMC) is a promising approach to reduce the
latency and energy consumption of Deep Neural Network (DNN) inference and training …
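
A minimal usage sketch assembled from the kit's published examples; the class names (AnalogLinear, AnalogSGD, SingleRPUConfig, ConstantStepDevice) follow the aihwkit documentation, though module paths may differ across versions:

```python
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# One fully connected layer whose weights live on a simulated resistive
# array with a simple constant-step update model.
model = AnalogLinear(4, 2, rpu_config=SingleRPUConfig(device=ConstantStepDevice()))

x = torch.tensor([[0.1, 0.2, 0.4, 0.3], [0.2, 0.1, 0.1, 0.3]])
y = torch.tensor([[1.0, 0.5], [0.7, 0.3]])

opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)

for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()  # weight updates are applied through the analog tile model
```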

Fast and robust analog in-memory deep neural network training

MJ Rasch, F Carta, O Fagbohungbe… - Nature …, 2024 - nature.com
Analog in-memory computing is a promising future technology for efficiently accelerating
deep learning networks. While using in-memory computing to accelerate the inference …
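
One ingredient in this line of work is a chopper: the sign with which gradients are accumulated is periodically flipped and undone at transfer, so a fixed device bias averages out over time. A heavily simplified, ideal-device sketch of that idea (array sizes, rates, and the transfer schedule are illustrative assumptions, not the paper's algorithm verbatim):

```python
import numpy as np

rng = np.random.default_rng(4)
n_out, n_in = 4, 8
A = np.zeros((n_out, n_in))   # gradient-accumulation array
C = np.zeros((n_out, n_in))   # weight array
chop = np.ones(n_in)          # per-column chopper signs
lr, transfer_lr = 0.01, 0.1

for step in range(200):
    x = rng.normal(size=n_in)
    delta = rng.normal(size=n_out)     # stand-in for a backprop error
    # Accumulate with a sign-flipped input: any fixed bias in how A
    # integrates updates alternates in sign and cancels on average.
    A += lr * np.outer(delta, chop * x)
    if step % 20 == 0:
        j = (step // 20) % n_in
        C[:, j] += transfer_lr * chop[j] * A[:, j]  # undo the chop here
        A[:, j] = 0.0
        chop[j] = -chop[j]   # flip this column's chopper for the next window
```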

Neural network training with asymmetric crosspoint elements

M Onen, T Gokmen, TK Todorov, T Nowicki… - Frontiers in artificial …, 2022 - frontiersin.org
Analog crossbar arrays comprising programmable non-volatile resistors are under intense
investigation for acceleration of deep neural network training. However, the ubiquitous …
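
The asymmetry in question is that potentiation and depression pulses move a device's conductance by different amounts. A toy model of such an element (the step sizes are arbitrary assumptions):

```python
def asymmetric_step(w, direction, up=1.0e-3, down=2.0e-3):
    """One pulse on a hypothetical asymmetric device: potentiation
    (direction=+1) and depression (direction=-1) change the weight
    by different amounts, so alternating pulse pairs do not cancel."""
    return w + up if direction > 0 else w - down

w = 0.0
for _ in range(100):              # 100 nominally canceling pulse pairs
    w = asymmetric_step(w, +1)
    w = asymmetric_step(w, -1)
print(w)   # about -0.1: the weight drifts even though the updates "cancel"
```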

Impact of asymmetric weight update on neural network training with tiki-taka algorithm

C Lee, K Noh, W Ji, T Gokmen, S Kim - Frontiers in neuroscience, 2022 - frontiersin.org
Recent progress in novel non-volatile memory-based synaptic device technologies and their
feasibility for matrix-vector multiplication (MVM) have ignited active research on implementing …
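
Tiki-Taka replaces direct SGD updates with two arrays: a fast matrix A that accumulates gradients around its symmetry point, and a slow matrix C that holds the weights and receives periodic column-wise transfers from A. A simplified, ideal-device sketch of that structure (sizes, rates, and the transfer schedule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_out, n_in = 4, 8
A = np.zeros((n_out, n_in))              # fast array: gradient accumulator
C = rng.normal(0, 0.1, (n_out, n_in))    # slow array: holds the weights
gamma, lr, transfer_lr = 0.5, 0.01, 0.1

def weights():
    # The effective weight is a fixed linear combination of both arrays.
    return gamma * A + C

for step in range(100):
    x = rng.normal(size=n_in)
    delta = rng.normal(size=n_out)   # stand-in for a backprop error
    A += lr * np.outer(delta, x)     # outer-product updates land on A
    if step % 10 == 0:               # occasional transfer: read one column
        j = (step // 10) % n_in      # of A and push it into C
        C[:, j] += transfer_lr * A[:, j]
```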

Enabling training of neural networks on noisy hardware

T Gokmen - Frontiers in Artificial Intelligence, 2021 - frontiersin.org
Deep neural networks (DNNs) are typically trained using the conventional stochastic
gradient descent (SGD) algorithm. However, SGD performs poorly when applied to train …
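
The failure mode is easy to reproduce: if downward steps land systematically larger than upward ones, gradient noise alone produces a net drift, and plain SGD settles away from the optimum. A small simulation of this effect (the asymmetry gain and noise level are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def device_update(w, dw, down_gain=1.5):
    # Hypothetical asymmetry: downward steps land 1.5x larger than upward
    # ones. Real devices show richer, state-dependent behavior.
    return w + (dw if dw > 0 else down_gain * dw)

target, lr = 0.3, 0.01
w_ideal = w_device = 0.0
for _ in range(5000):
    grad = (w_ideal - target) + rng.normal()    # noisy gradient, ideal case
    w_ideal -= lr * grad
    grad = (w_device - target) + rng.normal()   # same rule on the device
    w_device = device_update(w_device, -lr * grad)

print(w_ideal, w_device)   # w_device tends to settle well below the target
```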

Retention-aware zero-shifting technique for Tiki-Taka algorithm-based analog deep learning accelerator

K Noh, H Kwak, J Son, S Kim, M Um, M Kang, D Kim… - Science …, 2024 - science.org
We present the fabrication of 4K-scale electrochemical random-access memory (ECRAM)
cross-point arrays for an analog neural network training accelerator and an electrical …
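
Zero-shifting exploits the fact that a device with soft bounds has one conductance, its symmetry point, where up and down pulses balance; referencing each weight against that point keeps updates symmetric around zero. A toy illustration (the soft-bounds model and all constants are assumptions, not the paper's measured ECRAM characteristics):

```python
def soft_bounds_step(g, direction, g_min=0.0, g_max=1.0, dw=0.02):
    # Soft-bounds device: steps shrink near the conductance limits, so up
    # and down pulses balance only at one state, the symmetry point.
    if direction > 0:
        return g + dw * (g_max - g)
    return g - dw * (g - g_min)

g = 0.9
for _ in range(500):              # alternating pulses drive the device
    g = soft_bounds_step(g, +1)   # toward its symmetry point
    g = soft_bounds_step(g, -1)
print(g)   # converges near 0.5 for these bounds

# Zero-shifting (simplified): reference each device against its measured
# symmetry point so that weight zero sits where updates are balanced.
g_ref = 0.5
w = g - g_ref
```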