Holomorphic equilibrium propagation computes exact gradients through finite size oscillations

A Laborieux, F Zenke - Advances in neural information …, 2022 - proceedings.neurips.cc
Equilibrium propagation (EP) is an alternative to backpropagation (BP) that allows the
training of deep neural networks with local learning rules. It thus provides a compelling …

Single-phase deep learning in cortico-cortical networks

W Greedy, HW Zhu, J Pemberton… - Advances in neural …, 2022 - proceedings.neurips.cc
The error-backpropagation (backprop) algorithm remains the most common solution to the
credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether …

Backpropagation-free deep learning with recursive local representation alignment

AG Ororbia, A Mali, D Kifer, CL Giles - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Training deep neural networks on large-scale datasets requires significant hardware
resources whose costs (even on cloud platforms) put them out of reach of smaller …

Gradient-free training of recurrent neural networks using random perturbations

JG Fernández, S Keemink, M van Gerven - Frontiers in Neuroscience, 2024 - frontiersin.org
Recurrent neural networks (RNNs) hold immense potential for computations due to their
Turing completeness and sequential processing capabilities, yet existing methods for their …

Blockwise self-supervised learning at scale

SA Siddiqui, D Krueger, Y LeCun, S Deny - arXiv preprint arXiv …, 2023 - arxiv.org
Current state-of-the-art deep networks are all powered by backpropagation. In this paper, we
explore alternatives to full backpropagation in the form of blockwise learning rules …

Predictive coding feedback results in perceived illusory contours in a recurrent neural network

Z Pang, CB O'May, B Choksi, R VanRullen - Neural Networks, 2021 - Elsevier
Modern feedforward convolutional neural networks (CNNs) can now solve some computer
vision tasks at super-human levels. However, these networks only roughly mimic human …

Why linguistics will thrive in the 21st century: A reply to Piantadosi (2023)

J Kodner, S Payne, J Heinz - arXiv preprint arXiv:2308.03228, 2023 - arxiv.org
We present a critical assessment of Piantadosi's (2023) claim that "Modern language
models refute Chomsky's approach to language," focusing on four main points. First, despite …

Deriving differential target propagation from iterating approximate inverses

Y Bengio - arXiv preprint arXiv:2007.15139, 2020 - arxiv.org
We show that a particular form of target propagation, i.e., relying on learned inverses of each
layer, which is differential, i.e., where the target is a small perturbation of the forward …

Fixed-weight difference target propagation

T Shibuya, N Inoue, R Kawakami, I Sato - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Target Propagation (TP) is a biologically more plausible algorithm than the error
backpropagation (BP) to train deep networks, and improving practicality of TP is an open …

Cortico-cerebellar networks as decoupling neural interfaces

J Pemberton, E Boven, R Apps… - Advances in neural …, 2021 - proceedings.neurips.cc
The brain solves the credit assignment problem remarkably well. For credit to be assigned
across neural networks they must, in principle, wait for specific neural computations to finish …