A fine-grained analysis on distribution shift

O Wiles, S Gowal, F Stimberg, S Alvise-Rebuffi… - arXiv preprint arXiv …, 2021 - arxiv.org
Robustness to distribution shifts is critical for deploying machine learning models in the real
world. Despite this necessity, there has been little work in defining the underlying …

Wilds: A benchmark of in-the-wild distribution shifts

PW Koh, S Sagawa, H Marklund… - International …, 2021 - proceedings.mlr.press
Distribution shifts—where the training distribution differs from the test distribution—can
substantially degrade the accuracy of machine learning (ML) systems deployed in the wild …

On the need for a language describing distribution shifts: Illustrations on tabular datasets

J Liu, T Wang, P Cui… - Advances in Neural …, 2024 - proceedings.neurips.cc
Different distribution shifts require different algorithmic and operational interventions.
Methodological research must be grounded in the specific shifts it addresses. Although …

Bayesian invariant risk minimization

Y Lin, H Dong, H Wang… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Generalization under distributional shift is an open challenge for machine learning. Invariant
Risk Minimization (IRM) is a promising framework to tackle this issue by extracting invariant …

Breeds: Benchmarks for subpopulation shift

S Santurkar, D Tsipras, A Madry - arXiv preprint arXiv:2008.04859, 2020 - arxiv.org
We develop a methodology for assessing the robustness of models to subpopulation shift—
specifically, their ability to generalize to novel data subpopulations that were not observed …

Metashift: A dataset of datasets for evaluating contextual distribution shifts and training conflicts

W Liang, J Zou - arXiv preprint arXiv:2202.06523, 2022 - arxiv.org
Understanding the performance of machine learning models across diverse data
distributions is critically important for reliable applications. Motivated by this, there is a …

Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift

A Kumar, T Ma, P Liang… - Uncertainty in Artificial …, 2022 - proceedings.mlr.press
We often see undesirable tradeoffs in robust machine learning where out-of-distribution
(OOD) accuracy is at odds with in-distribution (ID) accuracy. A robust classifier obtained via …

Adaptive risk minimization: A meta-learning approach for tackling group shift

M Zhang, H Marklund, A Gupta… - arXiv preprint arXiv …, 2020 - marwandebbiche.github.io
A fundamental assumption of most machine learning algorithms is that the training and test
data are drawn from the same underlying distribution. However, this assumption is violated …

Examining and combating spurious features under distribution shift

C Zhou, X Ma, P Michel… - … Conference on Machine …, 2021 - proceedings.mlr.press
A central goal of machine learning is to learn robust representations that capture the
fundamental relationship between inputs and output labels. However, minimizing training …

The many faces of robustness: A critical analysis of out-of-distribution generalization

D Hendrycks, S Basart, N Mu… - Proceedings of the …, 2021 - openaccess.thecvf.com
We introduce four new real-world distribution shift datasets consisting of changes in image
style, image blurriness, geographic location, camera operation, and more. With our new …