Grounding neuroscience in behavioral changes using artificial neural networks

GW Lindsay - Current Opinion in Neurobiology, 2024 - Elsevier
Connecting neural activity to function is a common aim in neuroscience. How to define and
conceptualize function, however, can vary. Here I focus on grounding this goal in the specific …

A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning

M Mundt, Y Hong, I Pliushch, V Ramesh - Neural Networks, 2023 - Elsevier
Current deep learning methods are regarded as favorable if they empirically perform well on
dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual …

Self-supervised video pretraining yields robust and more human-aligned visual representations

N Parthasarathy, SM Eslami… - Advances in Neural …, 2023 - proceedings.neurips.cc
Humans learn powerful representations of objects and scenes by observing how they evolve
over time. Yet, outside of specific tasks that require explicit temporal understanding, static …

Curriculum learning with infant egocentric videos

S Sheybani, H Hansaria, J Wood… - Advances in Neural …, 2024 - proceedings.neurips.cc
Infants possess a remarkable ability to rapidly learn and process visual inputs. As an infant's
mobility increases, so does the variety and dynamics of their visual inputs. Is this change in …

Artificial neural network language models align neurally and behaviorally with humans even after a developmentally realistic amount of training

EA Hosseini, M Schrimpf, Y Zhang, S Bowman… - BioRxiv, 2022 - biorxiv.org
Artificial neural networks have emerged as computationally plausible models of human
language processing. A major criticism of these models is that the amount of training data …

Learning high-level visual representations from a child's perspective without strong inductive biases

AE Orhan, BM Lake - Nature Machine Intelligence, 2024 - nature.com
Young children develop sophisticated internal models of the world based on their visual
experience. Can such models be learned from a child's visual experience without strong …

Spikformer v2: Join the high accuracy club on ImageNet with an SNN ticket

Z Zhou, K Che, W Fang, K Tian, Y Zhu, S Yan… - arXiv preprint arXiv …, 2024 - arxiv.org
Spiking Neural Networks (SNNs), known for their biologically plausible architecture, face the
challenge of limited performance. The self-attention mechanism, which is the cornerstone of …

Not all semantics are created equal: Contrastive self-supervised learning with automatic temperature individualization

ZH Qiu, Q Hu, Z Yuan, D Zhou, L Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we aim to optimize a contrastive loss with individualized temperatures in a
principled and systematic manner for self-supervised learning. The common practice of …

Self-supervised video pretraining yields human-aligned visual representations

N Parthasarathy, SM Eslami, J Carreira… - arXiv preprint arXiv …, 2022 - arxiv.org
Humans learn powerful representations of objects and scenes by observing how they evolve
over time. Yet, outside of specific tasks that require explicit temporal understanding, static …

Artificial neural network language models predict human brain responses to language even after a developmentally realistic amount of training

EA Hosseini, M Schrimpf, Y Zhang… - Neurobiology of …, 2024 - direct.mit.edu
Artificial neural networks have emerged as computationally plausible models of human
language processing. A major criticism of these models is that the amount of training data …