Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity

M Jazayeri, S Ostojic - Current Opinion in Neurobiology, 2021 - Elsevier
The ongoing exponential rise in recording capacity calls for new approaches for analysing
and interpreting neural data. Effective dimensionality has emerged as an important property …
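
As a concrete reading of "effective dimensionality", one common estimator is the participation ratio of the covariance eigenvalues, PR = (sum_i lambda_i)^2 / (sum_i lambda_i^2). A minimal sketch on synthetic population activity (the Gaussian latent model and the participation_ratio helper are illustrative assumptions, not the review's method):

    import numpy as np

    def participation_ratio(X):
        """PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues),
        for an activity matrix X of shape (samples, neurons)."""
        lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
        return lam.sum() ** 2 / (lam ** 2).sum()

    rng = np.random.default_rng(0)
    latents = rng.standard_normal((1000, 5))      # 5 latent factors
    embedding = rng.standard_normal((5, 100))     # random embedding into 100 neurons
    activity = latents @ embedding + 0.1 * rng.standard_normal((1000, 100))
    print(participation_ratio(activity))          # far below the 100 neurons, tracking the 5 latents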

Signatures of task learning in neural representations

H Gurnani, NA Cayco-Gajic - Current Opinion in Neurobiology, 2023 - Elsevier
While neural plasticity has long been studied as the basis of learning, the growth of large-
scale neural recording techniques provides a unique opportunity to study how learning …

Feature learning in deep classifiers through intermediate neural collapse

A Rangamani, M Lindegaard… - International …, 2023 - proceedings.mlr.press
In this paper, we conduct an empirical study of the feature learning process in deep
classifiers. Recent research has identified a training phenomenon called Neural Collapse …
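
Neural collapse is usually quantified by within-class variability shrinking relative to between-class separation. A hedged sketch of one standard NC1-style metric, tr(Sigma_W pinv(Sigma_B)) / K, on simulated penultimate-layer features (the synthetic features and the metric normalization are assumptions, not the paper's exact protocol):

    import numpy as np

    def nc1_metric(features, labels):
        """tr(Sigma_W @ pinv(Sigma_B)) / K: tends to 0 as within-class scatter collapses."""
        classes = np.unique(labels)
        K, N = len(classes), len(features)
        mu_g = features.mean(axis=0)
        Sw = np.zeros((features.shape[1],) * 2)
        Sb = np.zeros_like(Sw)
        for c in classes:
            Fc = features[labels == c]
            mu_c = Fc.mean(axis=0)
            Sw += (Fc - mu_c).T @ (Fc - mu_c) / N
            Sb += np.outer(mu_c - mu_g, mu_c - mu_g) * len(Fc) / N
        return np.trace(Sw @ np.linalg.pinv(Sb)) / K

    rng = np.random.default_rng(0)
    K, n, d = 4, 200, 16
    means = rng.standard_normal((K, d)) * 5.0
    labels = np.repeat(np.arange(K), n)
    for spread in (1.0, 0.1):   # shrinking within-class spread mimics collapse
        feats = means[labels] + spread * rng.standard_normal((K * n, d))
        print(spread, nc1_metric(feats, labels))   # metric shrinks with the spread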

Neural collapse with normalized features: A geometric analysis over the riemannian manifold

C Yaras, P Wang, Z Zhu… - Advances in neural …, 2022 - proceedings.neurips.cc
When training overparameterized deep networks for classification tasks, it has been widely
observed that the learned features exhibit a so-called "neural collapse" phenomenon. More …
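
For normalized features, the predicted limiting geometry is a simplex equiangular tight frame on the unit sphere: the K class-mean directions are pairwise equiangular with cosine -1/(K-1). A quick numerical check of that target configuration (the construction below is an illustrative assumption, not the paper's Riemannian analysis):

    import numpy as np

    K, d = 5, 16
    # One standard simplex-ETF construction: center the K one-hot vectors,
    # rotate them into d dimensions, and normalize onto the unit sphere.
    M = np.eye(K) - np.ones((K, K)) / K            # centered one-hot class means
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((d, K)))   # orthonormal columns
    means = (Q @ M.T).T                            # K vectors in R^d
    means /= np.linalg.norm(means, axis=1, keepdims=True)

    cosines = means @ means.T
    off_diag = cosines[~np.eye(K, dtype=bool)]
    print(off_diag.round(4))        # all equal to -1/(K-1) = -0.25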

Neural representational geometry underlies few-shot concept learning

B Sorscher, S Ganguli… - Proceedings of the …, 2022 - National Acad Sciences
Understanding the neural basis of the remarkable human cognitive capacity to learn novel
concepts from just one or a few sensory experiences constitutes a fundamental problem. We …
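
Few-shot concept learning of this kind is commonly modeled with a prototype (nearest class-centroid) classifier, whose error is set by the signal-to-noise geometry of the class manifolds. A minimal sketch under an assumed isotropic Gaussian model (the dimensions, shot count, and separations are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    d, shots, queries = 128, 5, 2000

    def fewshot_accuracy(separation):
        # Two Gaussian "concept manifolds" with unit noise, means `separation` apart.
        mu_a = np.zeros(d)
        mu_b = np.zeros(d); mu_b[0] = separation
        proto_a = (mu_a + rng.standard_normal((shots, d))).mean(axis=0)
        proto_b = (mu_b + rng.standard_normal((shots, d))).mean(axis=0)
        q = mu_a + rng.standard_normal((queries, d))   # queries from class a
        pred_a = (np.linalg.norm(q - proto_a, axis=1)
                  < np.linalg.norm(q - proto_b, axis=1))
        return pred_a.mean()

    for sep in (1.0, 4.0):
        print(sep, fewshot_accuracy(sep))   # accuracy rises with mean separation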

Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion

M Farrell, S Recanatesi, T Moore, G Lajoie… - Nature Machine …, 2022 - nature.com
Neural networks need the right representations of input data to learn. Here we ask how
gradient-based learning shapes a fundamental property of representations in recurrent …
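
The compression/expansion trade-off can be made concrete by comparing how many principal components the inputs and the hidden states occupy. A hedged sketch with an untrained random RNN (the paper studies trained networks; the gain values and the 90%-variance criterion are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def dims_for_variance(X, frac=0.90):
        """Number of principal components explaining `frac` of the variance."""
        lam = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
        return int(np.searchsorted(np.cumsum(lam) / lam.sum(), frac) + 1)

    n_in, n_hid, T = 10, 200, 500
    W_in = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)
    inputs = rng.standard_normal((T, n_in))

    for gain in (0.5, 1.5):                      # weak vs strong recurrence
        W = gain * rng.standard_normal((n_hid, n_hid)) / np.sqrt(n_hid)
        h = np.zeros(n_hid)
        states = []
        for x in inputs:
            h = np.tanh(W @ h + W_in @ x)        # vanilla RNN update
            states.append(h)
        states = np.asarray(states)
        # The strongly recurrent network typically occupies more PCs (expansion).
        print(gain, dims_for_variance(inputs), dims_for_variance(states))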

High-performing neural network models of visual cortex benefit from high latent dimensionality

E Elmoznino, MF Bonner - PLOS Computational Biology, 2024 - journals.plos.org
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core
representational principles of computational models in neuroscience. Here we examined the …
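
Claims like this are typically tested with encoding models: model features are mapped to neural responses by regularized linear regression and scored by held-out prediction accuracy. A closed-form ridge sketch on synthetic data in which responses are driven by many weak latent dimensions (the simulation and the lam value are assumptions, not the paper's pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_feat = 500, 200, 300

    def ridge_r2(X_tr, y_tr, X_te, y_te, lam=10.0):
        # Closed-form ridge: w = (X'X + lam I)^-1 X'y, scored by held-out R^2.
        w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]),
                            X_tr.T @ y_tr)
        resid = y_te - X_te @ w
        return 1.0 - resid.var() / y_te.var()

    # A "voxel" response driven by 50 weak latent dimensions of the stimulus.
    latents = rng.standard_normal((n_train + n_test, 50))
    y = latents @ rng.standard_normal(50) + 0.5 * rng.standard_normal(n_train + n_test)

    for n_latent in (5, 50):   # low- vs high-dimensional model features
        X = latents[:, :n_latent] @ rng.standard_normal((n_latent, n_feat))
        print(n_latent, ridge_r2(X[:n_train], y[:n_train], X[n_train:], y[n_train:]))
        # the feature space with higher latent dimensionality predicts far better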

Landscape and training regimes in deep learning

M Geiger, L Petrini, M Wyart - Physics Reports, 2021 - Elsevier
Deep learning algorithms are responsible for a technological revolution in a variety of tasks
including image recognition or Go playing. Yet, why they work is not understood. Ultimately …

Deep nonparametric regression on approximate manifolds: Nonasymptotic error bounds with polynomial prefactors

Y Jiao, G Shen, Y Lin, J Huang - The Annals of Statistics, 2023, Vol. 51, No. 2, pp. 691–716 - projecteuclid.org
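
The point of such bounds is that the convergence rate is governed by the intrinsic dimension d of the (approximate) manifold rather than the ambient dimension D, with D entering only through a polynomial prefactor. In hedged form (beta is the Holder smoothness of the target f_0; the exact logarithmic factors and constants in the paper may differ):

    \mathbb{E}\,\lVert \hat{f}_n - f_0 \rVert_{L^2}^2
        \;\lesssim\; C(D)\, n^{-\frac{2\beta}{2\beta + d}} \,(\log n)^{\kappa},
    \qquad d \ll D .

For instance, with D = 1000, d = 10, and beta = 1, the exponent is 2/12, roughly 0.17, whereas a rate tied to the ambient dimension would give 2/1002, roughly 0.002; the curse of dimensionality is set by the manifold, and D appears only in the prefactor C(D).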

Multitask representations in the human cortex transform along a sensory-to-motor hierarchy

T Ito, JD Murray - Nature Neuroscience, 2023 - nature.com
Human cognition recruits distributed neural processes, yet the organizing computational and
functional architectures remain unclear. Here, we characterized the geometry and …
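
One standard probe of such multitask geometry is cross-condition generalization: train a linear readout of a task variable in one context and test it in a held-out context; abstract, shared coding transfers while context-specific coding does not. A hedged least-squares sketch on synthetic data (the orthogonal coding axes, mixing parameter, and noise level are assumptions, not the paper's analysis):

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, noise = 400, 400, 4.0

    # Three exactly orthogonal coding directions: a shared "rule" axis plus one
    # context-specific axis per context, each scaled to norm sqrt(d).
    Q, _ = np.linalg.qr(rng.standard_normal((d, 3)))
    shared, bend0, bend1 = (Q * np.sqrt(d)).T

    def transfer_accuracy(mix):
        """Train a least-squares rule decoder in context 0, test it in context 1."""
        ax0, ax1 = shared + mix * bend0, shared + mix * bend1
        def sample(rule, axis):
            return rule * axis + noise * rng.standard_normal((n, d))
        X_tr = np.vstack([sample(+1, ax0), sample(-1, ax0)])
        y_tr = np.repeat([1.0, -1.0], n)
        w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
        X_te = np.vstack([sample(+1, ax1), sample(-1, ax1)])
        return (np.sign(X_te @ w) == np.repeat([1.0, -1.0], n)).mean()

    for mix in (0.0, 10.0):   # shared (abstract) vs strongly context-specific coding
        print(mix, transfer_accuracy(mix))   # transfer degrades as mix grows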