Focus on the common good: Group distributional robustness follows

V Piratla, P Netrapalli, S Sarawagi - arXiv preprint arXiv:2110.02619, 2021 - arxiv.org
We consider the problem of training a classification model with group-annotated training
data. Recent work has established that, if there is distribution shift across different groups …
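
The snippet is cut off, but the setting it names (group-annotated training data under group distribution shift) is the one that group distributionally robust objectives target. Below is a minimal sketch of the generic worst-group loss, assuming PyTorch tensors and an integer group id per example; it illustrates the standard Group DRO-style objective, not necessarily the paper's own algorithm:

    import torch
    import torch.nn.functional as F

    def worst_group_loss(logits, labels, groups, num_groups):
        # Per-example cross-entropy, averaged within each group.
        per_example = F.cross_entropy(logits, labels, reduction="none")
        group_losses = [per_example[groups == g].mean()
                        for g in range(num_groups)
                        if (groups == g).any()]
        # Minimizing the max focuses the gradient on the worst-off group.
        return torch.stack(group_losses).max()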

Simplicity bias leads to amplified performance disparities

SJ Bell, L Sagun - Proceedings of the 2023 ACM Conference on …, 2023 - dl.acm.org
Which parts of a dataset will a given model find difficult? Recent work has shown that SGD-
trained models have a bias towards simplicity, leading them to prioritize learning a majority …
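
A tiny synthetic illustration of the disparity mechanism: when a model leans on a "simple" feature that only works for the majority, the minority subgroup pays. The setup below forces the effect with a linear model (which cannot express the XOR-structured "complex" feature at all); the paper's point is that SGD-trained deep networks, which could use the complex feature, often behave similarly. All names and numbers are illustrative:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    y = rng.integers(0, 2, n)
    # "Simple" feature: agrees with the label on 95% of examples.
    simple = np.where(rng.random(n) < 0.95, y, 1 - y)
    # "Complex" feature pair: fully predictive, but only through an XOR.
    z = rng.integers(0, 2, n)
    X = np.column_stack([simple, y ^ z, z]).astype(float)

    clf = LogisticRegression().fit(X, y)
    majority = simple == y
    print("majority accuracy:", clf.score(X[majority], y[majority]))   # ~1.0
    print("minority accuracy:", clf.score(X[~majority], y[~majority])) # ~0.0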

Confound-leakage: confound removal in machine learning leads to leakage

S Hamdan, BC Love, GG von Polier, S Weis… - …, 2023 - academic.oup.com
Background Machine learning (ML) approaches are a crucial component of modern data
analysis in many fields, including epidemiology and medicine. Nonlinear ML methods often …
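
For context, the standard "confound removal" step the paper scrutinizes is residualization: regress each feature on the confound and keep the residuals. A minimal sketch with hypothetical names; the paper's finding is that even a seemingly clean pipeline like this can leak confound signal into downstream nonlinear learners:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def residualize(X_train, c_train, X_test, c_test):
        # Fit the confound model on the training fold only; fitting on
        # pooled train+test data is a classic, separate source of leakage.
        reg = LinearRegression().fit(c_train.reshape(-1, 1), X_train)
        return (X_train - reg.predict(c_train.reshape(-1, 1)),
                X_test - reg.predict(c_test.reshape(-1, 1)))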

Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness

B Vasudeva, K Shahabi, V Sharan - arXiv preprint arXiv:2310.06161, 2023 - arxiv.org
Neural networks (NNs) are known to exhibit simplicity bias, where they tend to prefer
learning 'simple' features over more 'complex' ones, even when the latter may be more …
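
The truncated snippet does not say which mitigation the paper uses, so the sketch below shows one generic member of the family such work draws on: training two models while penalizing the alignment of their input gradients, so they cannot both latch onto the same easy feature. This is illustrative only and should not be read as the paper's method:

    import torch
    import torch.nn.functional as F

    def gradient_diversity_penalty(model_a, model_b, x):
        # Input gradients indicate which features each model relies on;
        # penalizing their alignment pushes the two models apart.
        x = x.detach().clone().requires_grad_(True)
        ga = torch.autograd.grad(model_a(x).sum(), x, create_graph=True)[0]
        gb = torch.autograd.grad(model_b(x).sum(), x, create_graph=True)[0]
        cos = F.cosine_similarity(ga.flatten(1), gb.flatten(1), dim=1)
        return cos.pow(2).mean()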

Encouraging intra-class diversity through a reverse contrastive loss for single-source domain generalization

T Duboudin, E Dellandréa, C Abgrall… - Proceedings of the …, 2021 - openaccess.thecvf.com
Traditional deep learning algorithms often fail to generalize when they are tested outside of
the domain of the training data. The issue can be mitigated by using unlabeled data from the …
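
The title states the core idea plainly: instead of pulling same-class embeddings together, push them apart to preserve intra-class diversity. A minimal sketch of that reversed objective; margins, weighting, and how it is combined with the task loss differ in the paper:

    import torch
    import torch.nn.functional as F

    def reverse_contrastive_loss(feats, labels):
        feats = F.normalize(feats, dim=1)
        sim = feats @ feats.t()                    # pairwise cosine similarity
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        same.fill_diagonal_(False)                 # drop trivial self-pairs
        # Minimizing mean same-class similarity spreads each class out.
        return sim[same].mean() if same.any() else sim.new_zeros(())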

Clarifying status of DNNs as models of human vision

JS Bowers, G Malhotra, M Dujmović… - Behavioral and …, 2023 - publications.aston.ac.uk
On several key issues we agree with the commentators. Perhaps most importantly, everyone
seems to agree that psychology has an important role to play in building better models of …

Roadblocks for temporarily disabling shortcuts and learning new knowledge

H Niu, H Li, F Zhao, B Li - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Deep learning models have been found to rely on shortcuts, i.e., decision
rules that perform well on standard benchmarks but fail when transferred to more …

Drop the shortcuts: image augmentation improves fairness and decreases AI detection of race and other demographics from medical images

R Wang, PC Kuo, LC Chen, KP Seastedt… - …, 2024 - thelancet.com
Background It has been shown that AI models can learn race from medical images, leading to
algorithmic bias. Our aim in this study was to enhance the fairness of medical image models …
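
The intervention studied is standard image augmentation. Below is a typical torchvision stack of the kind such a study would evaluate; the exact transforms and parameters are illustrative, not the paper's protocol:

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.RandomRotation(10),
        transforms.ToTensor(),
    ])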

An information-theoretic method to automatic shortcut avoidance and domain generalization for dense prediction tasks

WQ Chuah, R Tennakoon… - … on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Deep convolutional neural networks for dense prediction tasks are commonly optimized
using synthetic data, as generating pixel-wise annotations for real-world data is laborious …
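
The snippet does not spell out the method, so as orientation only: one common information-theoretic route to shortcut avoidance is to bottleneck the features, e.g. a variational information bottleneck that limits how much the representation can encode. The KL penalty below is that generic ingredient, not the paper's specific loss:

    import torch

    def vib_penalty(mu, log_var):
        # KL( N(mu, diag(exp(log_var))) || N(0, I) ), averaged over the batch.
        return 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(dim=1).mean()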

Identifying and benchmarking natural out-of-context prediction problems

D Madras, R Zemel - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Deep learning systems frequently fail at out-of-context (OOC) prediction, the problem of
making reliable predictions on uncommon or unusual inputs or subgroups of the training …
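
The measurement underlying such benchmarks is simple to state: score the model separately on common and uncommon subgroups and inspect the gaps. A minimal sketch with illustrative names:

    import numpy as np

    def per_group_accuracy(y_true, y_pred, groups):
        # Large gaps between frequent and rare groups are the
        # out-of-context failures these benchmarks surface.
        return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
                for g in np.unique(groups)}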