Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

The limits of fair medical imaging AI in real-world generalization

Y Yang, H Zhang, JW Gichoya, D Katabi… - Nature Medicine, 2024 - nature.com
As artificial intelligence (AI) rapidly approaches human-level performance in medical
imaging, it is crucial that it does not exacerbate or propagate healthcare disparities. Previous …

On feature learning in the presence of spurious correlations

P Izmailov, P Kirichenko, N Gruver… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep classifiers are known to rely on spurious features—patterns which are correlated with
the target on the training data but not inherently relevant to the learning problem, such as the …

On the need for a language describing distribution shifts: Illustrations on tabular datasets

J Liu, T Wang, P Cui… - Advances in Neural …, 2024 - proceedings.neurips.cc
Different distribution shifts require different algorithmic and operational interventions.
Methodological research must be grounded in the specific shifts it addresses. Although …

A whac-a-mole dilemma: Shortcuts come in multiples where mitigating one amplifies others

Z Li, I Evtimov, A Gordo, C Hazirbas… - Proceedings of the …, 2023 - openaccess.thecvf.com
Machine learning models have been found to learn shortcuts—unintended decision
rules that are unable to generalize—undermining models' reliability. Previous works address …

Invariant feature learning for generalized long-tailed classification

K Tang, M Tao, J Qi, Z Liu, H Zhang - European Conference on Computer …, 2022 - Springer
Existing long-tailed classification (LT) methods focus only on tackling the class-wise
imbalance, in which head classes have more samples than tail classes, but overlook the attribute …

Change is hard: A closer look at subpopulation shift

Y Yang, H Zhang, D Katabi, M Ghassemi - arXiv preprint arXiv:2302.12254, 2023 - arxiv.org
Machine learning models often perform poorly on subgroups that are underrepresented in
the training data. Yet, little is understood about the variation in mechanisms that cause …

Towards last-layer retraining for group robustness with fewer annotations

T LaBonte, V Muthukumar… - Advances in Neural …, 2024 - proceedings.neurips.cc
Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious
correlations and poor generalization on minority groups. The recent deep feature …

Simple and fast group robustness by automatic feature reweighting

S Qiu, A Potapczynski, P Izmailov… - … on Machine Learning, 2023 - proceedings.mlr.press
A major challenge to out-of-distribution generalization is reliance on spurious features—
patterns that are predictive of the class label in the training data distribution, but not causally …

ID and OOD performance are sometimes inversely correlated on real-world datasets

D Teney, Y Lin, SJ Oh… - Advances in Neural …, 2024 - proceedings.neurips.cc
Several studies have compared the in-distribution (ID) and out-of-distribution (OOD)
performance of models in computer vision and NLP. They report a frequent positive …