Brain-inspired learning in artificial neural networks: a review

S Schmidgall, R Ziaei, J Achterberg, L Kirsch… - APL Machine …, 2024 - pubs.aip.org
Artificial neural networks (ANNs) have emerged as an essential tool in machine learning,
achieving remarkable success across diverse domains, including image and speech …

How connectivity structure shapes rich and lazy learning in neural circuits

YH Liu, A Baratin, J Cornford, S Mihalas… - arXiv preprint arXiv …, 2023 - arxiv.org
In theoretical neuroscience, recent work leverages deep learning tools to explore how certain attributes of a network critically influence its learning dynamics. Notably, initial weight …
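The rich/lazy distinction this abstract alludes to can be demonstrated in a few lines: with a large initial weight variance a network tends to train in the "lazy" regime, where its weights barely move, while a small variance pushes it toward "rich" feature learning, where the weights reorganize substantially. Below is a minimal sketch of that diagnostic, assuming a toy two-layer tanh regression network with gain values chosen purely for illustration; it is not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))              # toy inputs
y = np.tanh(X @ rng.standard_normal(20))        # toy regression targets

def relative_weight_change(gain, width=512, lr=1e-2, steps=500):
    """Train a two-layer tanh network; return how far the hidden weights move."""
    W = gain * rng.standard_normal((20, width)) / np.sqrt(20)   # hidden layer
    a = rng.standard_normal(width) / np.sqrt(width)             # linear readout
    W0 = W.copy()
    for _ in range(steps):
        h = np.tanh(X @ W)
        err = h @ a - y                                         # prediction error
        g_h = np.outer(err, a) * (1 - h**2)                     # backprop through tanh
        W -= lr * X.T @ g_h / len(X)
        a -= lr * h.T @ err / len(X)
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

print("small init (rich):", relative_weight_change(0.1))   # weights move a lot
print("large init (lazy):", relative_weight_change(3.0))   # weights barely move
```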

Mobilization of endocannabinoids by midbrain dopamine neurons is required for the encoding of reward prediction

MÁ Luján, DP Covey, R Young-Morrison… - Nature …, 2023 - nature.com
Brain levels of the endocannabinoid 2-arachidonoylglycerol (2-AG) shape motivated
behavior and nucleus accumbens (NAc) dopamine release. However, it is not clear whether …

Sparseprop: Efficient event-based simulation and training of sparse recurrent spiking neural networks

R Engelken - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
Spiking Neural Networks (SNNs) are biologically inspired models that are capable
of processing information in streams of action potentials. However, simulating and training …
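The event-based idea in this abstract can be illustrated with a toy version: instead of stepping every neuron on a fixed time grid, keep a priority queue of analytically computed threshold-crossing times and only touch neurons when a spike actually occurs. The sketch below assumes a leaky integrate-and-fire model (dV/dt = (I − V)/τ, threshold 1, reset 0, instantaneous inhibitory synapses) and a generic event queue with lazy invalidation; it is not Engelken's SparseProp data structure.

```python
import heapq, math
import numpy as np

rng = np.random.default_rng(1)
N, K, tau, T = 100, 10, 0.01, 0.1           # neurons, in-degree, membrane tau, horizon (s)
I = 1.2 * np.ones(N)                        # constant suprathreshold drive
W = {j: [(int(i), -0.05) for i in rng.choice(N, K, replace=False)]
     for j in range(N)}                     # sparse inhibitory connectivity
V = np.zeros(N)                             # membrane potentials at time last[i]
last = np.zeros(N)                          # time of each neuron's last update
ver = np.zeros(N, dtype=int)                # version stamps for lazy invalidation

def decay_to(i, t):                         # evolve V[i] analytically up to time t
    V[i] = I[i] + (V[i] - I[i]) * math.exp(-(t - last[i]) / tau)
    last[i] = t

def next_cross(i, t):                       # analytic next threshold-crossing time
    if V[i] < 1.0 < I[i]:
        return t + tau * math.log((I[i] - V[i]) / (I[i] - 1.0))
    return math.inf

heap = [(next_cross(i, 0.0), i, 0) for i in range(N)]
heapq.heapify(heap)
spikes = []
while heap:
    t, j, v = heapq.heappop(heap)
    if t > T:
        break                               # past the simulation horizon
    if v != ver[j]:
        continue                            # stale event: neuron changed since
    spikes.append((t, j))
    decay_to(j, t)
    V[j] = 0.0                              # reset the spiking neuron
    ver[j] += 1
    heapq.heappush(heap, (next_cross(j, t), j, ver[j]))
    for i, w in W[j]:                       # deliver the spike to its targets
        decay_to(i, t)
        V[i] += w
        ver[i] += 1
        heapq.heappush(heap, (next_cross(i, t), i, ver[i]))
print(len(spikes), "spikes in", T, "s")
```

Because membrane trajectories between events are closed-form, the cost scales with the number of spikes rather than the number of time steps, which is the efficiency argument event-based simulators make.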

A neuro-mimetic realization of the common model of cognition via hebbian learning and free energy minimization

AG Ororbia, MA Kelly - Proceedings of the AAAI Symposium Series, 2023 - ojs.aaai.org
Over the last few years, large neural generative models, capable of synthesizing semantically rich passages of text or producing complex images, have emerged as …
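Since the title invokes Hebbian learning, a compact reference point is Oja's rule, a stabilized Hebbian update under which a single linear neuron's weights converge to the leading principal component of its inputs. The sketch below is just that textbook rule, not the authors' neuro-mimetic architecture or their free-energy formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])                          # data covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w, eta = rng.standard_normal(2), 0.01
for x in X:
    y = w @ x                                       # postsynaptic activity
    w += eta * y * (x - y * w)                      # Hebbian term + Oja's decay

vals, vecs = np.linalg.eigh(C)
print("learned direction:", w / np.linalg.norm(w))
print("leading eigenvector:", vecs[:, -1])          # agreement up to sign
```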

Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules

YH Liu, A Ghosh, B Richards… - Advances in Neural …, 2022 - proceedings.neurips.cc
To unveil how the brain learns, ongoing work seeks biologically plausible approximations of
gradient descent algorithms for training recurrent neural networks (RNNs). Yet, beyond task …

Spatio-Temporal Approximation: A Training-Free SNN Conversion for Transformers

Y Jiang, K Hu, T Zhang, H Gao, Y Liu… - The Twelfth …, 2024 - openreview.net
Spiking neural networks (SNNs) are energy-efficient and hold great potential for large-scale
inference. Since training SNNs from scratch is costly and yields limited performance …
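The conversion idea this abstract builds on is easiest to see for a single unit: drive an integrate-and-fire neuron with a constant value and its firing rate approximates the ReLU of that value, which is why a trained ANN can be mapped onto an SNN without retraining. A minimal sketch of that rate-coding correspondence follows; threshold and time horizon are illustrative, and this is not the paper's spatio-temporal approximation for transformer blocks.

```python
def if_rate(z, T=1000, v_th=1.0):
    """Spike rate of an integrate-and-fire neuron over T steps, constant input z."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += z                          # integrate the input
        if v >= v_th:                   # fire and subtract threshold ("soft reset")
            spikes += 1
            v -= v_th
    return spikes / T                   # firing rate in spikes per step

for z in [-0.3, 0.0, 0.2, 0.5, 0.9]:
    print(z, max(z, 0.0), if_rate(z))   # rate ≈ ReLU(z) for z in [0, 1)
```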

Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks

D Mastrovito, YH Liu, L Kusmierz, E Shea-Brown… - bioRxiv, 2024 - ncbi.nlm.nih.gov
Recurrent neural networks exhibit chaotic dynamics when the variance in their connection
strengths exceeds a critical value. Recent work indicates connection variance also modulates …
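The critical value mentioned here is the classical gain threshold g = 1 for random rate networks with coupling variance g²/N (Sompolinsky, Crisanti, and Sommers, 1988): below it a small perturbation decays toward the quiescent fixed point, above it nearby trajectories diverge. A minimal sketch of that transition, with network size and integration settings chosen only for illustration and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 500, 0.05, 4000

def divergence(g):
    """Final distance between two trajectories started 1e-8 apart."""
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # coupling, variance g^2/N
    x = rng.standard_normal(N)
    y = x + 1e-8 * rng.standard_normal(N)              # perturbed copy
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))             # Euler step, tau = 1
        y = y + dt * (-y + J @ np.tanh(y))
    return np.linalg.norm(x - y)

print("g = 0.5:", divergence(0.5))   # perturbation decays: stable regime
print("g = 1.5:", divergence(1.5))   # perturbation grows: chaotic regime
```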

Structured flexibility in recurrent neural networks via neuromodulation

JC Costacurta, S Bhandarkar, DM Zoltowski… - bioRxiv, 2024 - biorxiv.org
The goal of theoretical neuroscience is to develop models that help us better understand
biological intelligence. Such models range broadly in complexity and biological detail. For …

Geometry of learning and representation in neural networks

P Sokół - 2023 - search.proquest.com
Theoretical neuroscience has come to face a unique set of opportunities and challenges. By
virtue of being at the nexus of experimental neurobiology and machine learning, theoretical …