Analyzing biological and artificial neural networks: challenges with opportunities for synergy?

DGT Barrett, AS Morcos, JH Macke - Current opinion in neurobiology, 2019 - Elsevier
Highlights: • Artificial and biological neural networks can be analyzed using similar
methods. • Neural analysis has revealed similarities between the representations in artificial …

Modeling similarity and psychological space

BD Roads, BC Love - Annual Review of Psychology, 2024 - annualreviews.org
Similarity and categorization are fundamental processes in human cognition that help
complex organisms make sense of the cacophony of information in their environment. These …

Memorization without overfitting: Analyzing the training dynamics of large language models

K Tirumala, A Markosyan… - Advances in …, 2022 - proceedings.neurips.cc
Despite their wide adoption, the underlying training and memorization dynamics of very
large language models are not well understood. We empirically study exact memorization in …

Do vision transformers see like convolutional neural networks?

M Raghu, T Unterthiner, S Kornblith… - Advances in neural …, 2021 - proceedings.neurips.cc
Convolutional neural networks (CNNs) have so far been the de-facto model for visual data.
Recent work has shown that (Vision) Transformer models (ViT) can achieve comparable or …

Layer-wise analysis of a self-supervised speech representation model

A Pasad, JC Chou, K Livescu - 2021 IEEE Automatic Speech …, 2021 - ieeexplore.ieee.org
Recently proposed self-supervised learning approaches have been successful for
pre-training speech representation models. The utility of these learned representations has been …

Learning from failure: De-biasing classifier from biased classifier

J Nam, H Cha, S Ahn, J Lee… - Advances in Neural …, 2020 - proceedings.neurips.cc
Neural networks often learn to make predictions that overly rely on spurious correlations
existing in the dataset, which causes the model to be biased. While previous work tackles …

Rapid learning or feature reuse? Towards understanding the effectiveness of MAML

A Raghu, M Raghu, S Bengio, O Vinyals - arXiv preprint arXiv:1909.09157, 2019 - arxiv.org
An important research direction in machine learning has centered around developing meta-
learning algorithms to tackle few-shot learning. An especially successful algorithm has been …

Similarity of neural network representations revisited

S Kornblith, M Norouzi, H Lee… - … conference on machine …, 2019 - proceedings.mlr.press
Recent work has sought to understand the behavior of neural networks by comparing
representations between layers and between different trained models. We examine methods …

Transfusion: Understanding transfer learning for medical imaging

M Raghu, C Zhang, J Kleinberg… - Advances in neural …, 2019 - proceedings.neurips.cc
Transfer learning from natural image datasets, particularly ImageNet, using standard large
models and corresponding pretrained weights has become a de-facto method for deep …

Do wide and deep networks learn the same things? Uncovering how neural network representations vary with width and depth

T Nguyen, M Raghu, S Kornblith - arXiv preprint arXiv:2010.15327, 2020 - arxiv.org
A key factor in the success of deep neural networks is the ability to scale models to improve
performance by varying the architecture depth and width. This simple property of neural …