Focus on the common good: Group distributional robustness follows

V Piratla, P Netrapalli, S Sarawagi - arXiv preprint arXiv:2110.02619, 2021 - arxiv.org
We consider the problem of training a classification model with group annotated training
data. Recent work has established that, if there is distribution shift across different groups …

Just train twice: Improving group robustness without training group information

EZ Liu, B Haghgoo, AS Chen… - International …, 2021 - proceedings.mlr.press
Standard training via empirical risk minimization (ERM) can produce models that achieve
low error on average but high error on minority groups, especially in the presence of …

Optimal algorithms for group distributionally robust optimization and beyond

T Soma, K Gatmiry, S Jegelka - arXiv preprint arXiv:2212.13669, 2022 - arxiv.org
Distributionally robust optimization (DRO) can improve the robustness and fairness of
learning methods. In this paper, we devise stochastic algorithms for a class of DRO …

AGRO: Adversarial discovery of error-prone groups for robust optimization

B Paranjape, P Dasigi, V Srikumar… - arXiv preprint arXiv …, 2022 - arxiv.org
Models trained via empirical risk minimization (ERM) are known to rely on spurious
correlations between labels and task-independent input features, resulting in poor …

Loss function learning for domain generalization by implicit gradient

B Gao, H Gouk, Y Yang… - … Conference on Machine …, 2022 - proceedings.mlr.press
Generalising robustly to distribution shift is a major challenge that is pervasive across most
real-world applications of machine learning. A recent study highlighted that many advanced …

Barack: Partially supervised group robustness with guarantees

NS Sohoni, M Sanjabi, N Ballas, A Grover… - arXiv preprint arXiv …, 2021 - arxiv.org
While neural networks have shown remarkable success on classification tasks in terms of
average-case performance, they often fail to perform well on certain groups of the data. Such …

Examining and combating spurious features under distribution shift

C Zhou, X Ma, P Michel… - … Conference on Machine …, 2021 - proceedings.mlr.press
A central goal of machine learning is to learn robust representations that capture the
fundamental relationship between inputs and output labels. However, minimizing training …

Fishr: Invariant gradient variances for out-of-distribution generalization

A Rame, C Dancette, M Cord - International Conference on …, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical
for real-world applications. To this end, there has been a growing surge of interest to learn …

Doro: Distributional and outlier robust optimization

R Zhai, C Dan, Z Kolter… - … Conference on Machine …, 2021 - proceedings.mlr.press
Many machine learning tasks involve subpopulation shift where the testing data distribution
is a subpopulation of the training distribution. For such settings, a line of recent work has …

Stochastic approximation approaches to group distributionally robust optimization

L Zhang, P Zhao, ZH Zhuang… - Advances in Neural …, 2024 - proceedings.neurips.cc
This paper investigates group distributionally robust optimization (GDRO), with the aim of learning a model that performs well over $m$ different distributions. First, we formulate …
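For context, the worst-case objective that GDRO methods like this one target over $m$ group distributions is commonly written as the following minimax problem (a standard textbook formulation; the notation $\mathcal{W}$, $\mathcal{P}_i$, and $\ell$ is illustrative and not taken from the abstract above):

```latex
% Group DRO: minimize the worst-case expected loss across the m groups
\min_{w \in \mathcal{W}} \; \max_{i \in [m]} \;
  \mathbb{E}_{z \sim \mathcal{P}_i}\!\left[ \ell(w; z) \right]
```

Stochastic-approximation approaches typically replace the inner maximum with a smoothed or sampled surrogate so that each iteration only touches a few groups rather than all $m$.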