Model metamers reveal divergent invariances between biological and artificial neural networks

J Feather, G Leclerc, A Mądry, JH McDermott - Nature Neuroscience, 2023 - nature.com
Deep neural network models of sensory systems are often proposed to learn
representational transformations with invariances like those in the brain. To reveal these …

Metamers of neural networks reveal divergence from human perceptual systems

J Feather, A Durango, R Gonzalez… - Advances in Neural …, 2019 - proceedings.neurips.cc
Deep neural networks have been embraced as models of sensory systems, instantiating
representational transformations that appear to resemble those in the visual and auditory …

Learning invariant representations from EEG via adversarial inference

O Özdenizci, Y Wang, T Koike-Akino… - IEEE access, 2020 - ieeexplore.ieee.org
Discovering and exploiting shared, invariant neural activity in electroencephalogram (EEG)
based classification tasks is of significant interest for generalizability of decoding models …

Towards interpretable deep neural networks by leveraging adversarial examples

Y Dong, H Su, J Zhu, F Bao - arXiv preprint arXiv:1708.05493, 2017 - arxiv.org
Deep neural networks (DNNs) have demonstrated impressive performance on a wide array
of tasks, but they are usually considered opaque since internal structure and learned …

Adversarial examples that fool both computer vision and time-limited humans

G Elsayed, S Shankar, B Cheung… - Advances in neural …, 2018 - proceedings.neurips.cc
Machine learning models are vulnerable to adversarial examples: small changes to
images can cause computer vision models to make mistakes such as identifying a school …

Controversial stimuli: Pitting neural networks against each other as models of human cognition

T Golan, PC Raju… - Proceedings of the …, 2020 - National Acad Sciences
Distinct scientific theories can make similar predictions. To adjudicate between theories, we
must design experiments for which the theories make distinct predictions. Here we consider …

Adversarial robustness as a prior for learned representations

L Engstrom, A Ilyas, S Santurkar, D Tsipras… - arXiv preprint arXiv …, 2019 - arxiv.org
An important goal in deep learning is to learn versatile, high-level feature representations of
input data. However, standard networks' representations seem to possess shortcomings …

Neural networks with recurrent generative feedback

Y Huang, J Gornet, S Dai, Z Yu… - Advances in …, 2020 - proceedings.neurips.cc
Neural networks are vulnerable to input perturbations such as additive noise and
adversarial attacks. In contrast, human perception is much more robust to such …

Closer look at the transferability of adversarial examples: How they fool different models differently

F Waseda, S Nishikawa, TN Le… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks are vulnerable to adversarial examples (AEs), which have adversarial
transferability: AEs generated for the source model can mislead another (target) model's …

Human alignment of neural network representations

L Muttenthaler, J Dippel, L Linhardt… - arXiv preprint arXiv …, 2022 - arxiv.org
Today's computer vision models achieve human or near-human level performance across a
wide variety of vision tasks. However, their architectures, data, and learning algorithms differ …