The error-backpropagation (backprop) algorithm remains the most common solution to the credit assignment problem in artificial neural networks. In neuroscience, it is unclear whether …
Training deep neural networks on large-scale datasets requires significant hardware resources whose costs (even on cloud platforms) put them out of reach of smaller …
JG Fernández, S Keemink, M van Gerven - Frontiers in Neuroscience, 2024 - frontiersin.org
Recurrent neural networks (RNNs) hold immense potential for computations due to their Turing completeness and sequential processing capabilities, yet existing methods for their …
Current state-of-the-art deep networks are all powered by backpropagation. In this paper, we explore alternatives to full backpropagation in the form of blockwise learning rules …
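The blockwise idea in the snippet above can be illustrated with a minimal sketch: each block gets its own local readout head and local loss, and no error signal crosses block boundaries (the second block consumes a detached copy of the first block's activity). All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 4-dim inputs, scalar targets (illustrative only).
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))

# Two "blocks", each a ReLU layer with its own local linear readout head.
W1 = rng.normal(size=(4, 8)) * 0.1
H1 = rng.normal(size=(8, 1)) * 0.1   # local head for block 1
W2 = rng.normal(size=(8, 8)) * 0.1
H2 = rng.normal(size=(8, 1)) * 0.1   # local head for block 2

lr = 0.05
losses1, losses2 = [], []
for step in range(200):
    # Block 1: forward pass and local loss; no error signal arrives from block 2.
    a1 = np.maximum(X @ W1, 0.0)
    e1 = a1 @ H1 - y
    losses1.append(float(np.mean(e1 ** 2)))
    dW1 = X.T @ ((e1 @ H1.T) * (a1 > 0)) / len(X)   # local gradient only
    dH1 = a1.T @ e1 / len(X)
    W1 -= lr * dW1
    H1 -= lr * dH1

    # Block 2 consumes a detached copy of block 1's activity (gradient stopped).
    a1_in = a1.copy()
    a2 = np.maximum(a1_in @ W2, 0.0)
    e2 = a2 @ H2 - y
    losses2.append(float(np.mean(e2 ** 2)))
    dW2 = a1_in.T @ ((e2 @ H2.T) * (a2 > 0)) / len(X)
    dH2 = a2.T @ e2 / len(X)
    W2 -= lr * dW2
    H2 -= lr * dH2
```

Because each block only differentiates through its own parameters, the blocks could in principle be trained in parallel or on separate devices; the trade-off is that block 1's representation is shaped by its local head rather than by the downstream task loss.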
Modern feedforward convolutional neural networks (CNNs) can now solve some computer vision tasks at super-human levels. However, these networks only roughly mimic human …
We present a critical assessment of Piantadosi's (2023) claim that "Modern language models refute Chomsky's approach to language," focusing on four main points. First, despite …

Y Bengio - arXiv preprint arXiv:2007.15139, 2020 - arxiv.org
We show that a particular form of target propagation, i.e., relying on learned inverses of each layer, which is differential, i.e., where the target is a small perturbation of the forward …
Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open …
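The two entries above describe the core of difference target propagation: each layer receives a target that is a small perturbation of its forward activity, computed by an (ideally learned) inverse of the layer above plus a difference correction, and each layer then makes a purely local update toward its target. The sketch below is an assumption-laden toy version: the learned inverse is replaced by a matrix pseudo-inverse to keep it short, and the architecture and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data for a 4 -> 6 -> 2 network with a tanh hidden layer (illustrative).
X = rng.normal(size=(16, 4))
Y = rng.normal(size=(16, 2)) * 0.5

W1 = rng.normal(size=(4, 6)) * 0.3
W2 = rng.normal(size=(6, 2)) * 0.3

lr = 0.1
losses = []
for step in range(300):
    # Forward pass.
    h1 = np.tanh(X @ W1)
    h2 = h1 @ W2                      # linear output layer

    losses.append(float(np.mean((h2 - Y) ** 2)))

    # Output target: a small perturbation of the forward activity toward the label.
    t2 = h2 - lr * (h2 - Y)

    # Stand-in inverse of the output layer (pseudo-inverse instead of a
    # learned inverse, purely to keep the sketch self-contained).
    W2_inv = np.linalg.pinv(W2)

    # Difference correction: t1 = g(t2) + (h1 - g(h2)), which cancels the
    # inverse's reconstruction error at the current activity.
    t1 = t2 @ W2_inv + (h1 - h2 @ W2_inv)

    # Each layer regresses its own output onto its target (local updates only).
    d1 = (h1 - t1) * (1.0 - h1 ** 2)  # tanh derivative
    W1 -= lr * X.T @ d1 / len(X)
    W2 -= lr * h1.T @ (h2 - t2) / len(X)
```

The difference correction is what makes this workable with an imperfect inverse: when the target equals the forward activity (`t2 == h2`), the propagated target `t1` reduces exactly to `h1`, so a perfect fit produces no spurious updates.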
J Pemberton, E Boven, R Apps… - Advances in neural …, 2021 - proceedings.neurips.cc
The brain solves the credit assignment problem remarkably well. For credit to be assigned across neural networks, they must, in principle, wait for specific neural computations to finish …