As artificial intelligence calls for novel energy-efficient hardware, neuromorphic computing systems based on analog resistive switching memory (RSM) devices have drawn great …
Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips …
Artificial intelligence (AI) has the potential to radically transform our lives and society, by enabling machine learning in industry, business, health care, transportation, and …
Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data …
There is a significant need to build efficient non-von Neumann computing systems for highly data-centric artificial-intelligence applications. Brain-inspired computing is one such …
A Laborieux, F Zenke - Advances in neural information …, 2022 - proceedings.neurips.cc
Equilibrium propagation (EP) is an alternative to backpropagation (BP) that allows the training of deep neural networks with local learning rules. It thus provides a compelling …
We survey recent progress in the use of analog memory devices to build neuromorphic hardware accelerators for deep learning applications. After an overview of deep learning …
The on-chip implementation of learning algorithms would speed up the training of neural networks in crossbar arrays. The circuit-level design and implementation of a back …
Traditional computing systems based on the von Neumann architecture are fundamentally bottlenecked by data transfers between processors and memory. The emergence of data …