Are deep neural networks adequate behavioral models of human visual perception?

FA Wichmann, R Geirhos - Annual Review of Vision Science, 2023 - annualreviews.org
Deep neural networks (DNNs) are machine learning algorithms that have revolutionized
computer vision due to their remarkable successes in tasks like object classification and …

HIVE: Evaluating the human interpretability of visual explanations

SSY Kim, N Meister, VV Ramaswamy, R Fong… - … on Computer Vision, 2022 - Springer
As AI technology is increasingly applied to high-impact, high-risk domains, there have been
a number of new methods aimed at making AI models more human-interpretable. Despite …

What do vision transformers learn? A visual exploration

A Ghiasi, H Kazemi, E Borgnia, S Reich, M Shu… - arXiv preprint arXiv …, 2022 - arxiv.org
Vision transformers (ViTs) are quickly becoming the de-facto architecture for computer
vision, yet we understand very little about why they work and what they learn. While existing …

Scale alone does not improve mechanistic interpretability in vision models

RS Zimmermann, T Klein… - Advances in Neural …, 2024 - proceedings.neurips.cc
In light of the recent widespread adoption of AI systems, understanding the internal
information processing of neural networks has become increasingly critical. Most recently …

Don't trust your eyes: on the (un)reliability of feature visualizations

R Geirhos, RS Zimmermann, B Bilodeau… - arXiv preprint arXiv …, 2023 - arxiv.org
How do neural networks extract patterns from pixels? Feature visualizations attempt to
answer this important question by visualizing highly activating patterns through optimization …
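
The snippet describes feature visualization as finding inputs that strongly activate internal units through optimization. Below is a minimal sketch of that general activation-maximization recipe in PyTorch, not these authors' code; the model, layer, and channel choices are illustrative assumptions:

```python
# Sketch of feature visualization by activation maximization: start from
# noise and ascend the gradient of one channel's mean activation.
# Model, layer, and channel choices are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

# Grab an intermediate layer's activations with a forward hook.
acts = {}
model.layer3.register_forward_hook(lambda m, i, out: acts.update(feat=out))

img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
channel = 7  # arbitrary unit to visualize

for _ in range(256):
    opt.zero_grad()
    model(img)
    loss = -acts["feat"][0, channel].mean()  # minimize the negative activation
    loss.backward()
    opt.step()

visualization = img.detach().clamp(0, 1)
```

Run unconstrained like this, the optimization tends to produce noisy, adversarial-looking images, which is one reason the reliability of such visualizations is questioned.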

Understanding of the predictability and uncertainty in population distributions empowered by visual analytics

P Luo, C Chen, S Gao, X Zhang… - International Journal …, 2024 - Taylor & Francis
Understanding the intricacies of fine-grained population distribution, including both
predictability and uncertainty, is crucial for urban planning, social equity, and environmental …

Testing methods of neural systems understanding

GW Lindsay, D Bau - Cognitive Systems Research, 2023 - Elsevier
Neuroscientists apply a range of analysis tools to recorded neural activity in order to glean
insights into how neural circuits drive behavior in organisms. Despite the fact that these tools …

Sim2word: Explaining similarity with representative attribute words via counterfactual explanations

R Chen, J Li, H Zhang, C Sheng, L Liu… - ACM Transactions on …, 2023 - dl.acm.org
Recently, we have witnessed substantial success using deep neural networks in many
tasks. Although concerns remain about the explainability of their decision making, it is …

Unlocking feature visualization for deeper networks with magnitude constrained optimization

T Fel, T Boissin, V Boutin, A Picard… - Advances in …, 2023 - proceedings.neurips.cc
Feature visualization has gained significant popularity as an explainability method,
particularly after the influential work by Olah et al. in 2017. Despite its success, its …
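
The "magnitude constrained" idea named in the title can be sketched as follows: parameterize the image by its Fourier phase and hold the magnitude spectrum fixed, so the optimizer can only rearrange phase rather than inject arbitrary frequency energy. The 1/f stand-in prior, image size, and sigmoid squashing below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of a magnitude-constrained parameterization: optimize only the
# Fourier phase while keeping the magnitude spectrum fixed.
import torch

H = W = 224

# Fixed magnitude spectrum: a 1/f-style falloff as a stand-in for a
# natural-image prior (lowest frequencies clamped to avoid division by zero).
fy = torch.fft.fftfreq(H).reshape(-1, 1)
fx = torch.fft.rfftfreq(W).reshape(1, -1)
magnitude = 1.0 / torch.sqrt(fx**2 + fy**2).clamp(min=1.0 / max(H, W))
magnitude = magnitude.expand(3, -1, -1)  # shared across RGB channels

# Only the phase is a free parameter.
phase = torch.randn(3, H, W // 2 + 1, requires_grad=True)

def to_image(phase):
    spectrum = magnitude * torch.exp(1j * phase)  # fixed |F|, free angle
    img = torch.fft.irfft2(spectrum, s=(H, W))    # back to pixel space
    return torch.sigmoid(img).unsqueeze(0)        # squash to [0, 1], add batch dim

opt = torch.optim.Adam([phase], lr=0.05)
# Each step: feed to_image(phase) through the model and maximize the target
# activation exactly as in the activation-maximization loop sketched earlier.
```

Because the magnitude is never free, the resulting images keep a natural-image-like frequency profile regardless of how aggressively the phase is optimized.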
