Standard training via empirical risk minimization (ERM) can produce models that achieve low error on average but high error on minority groups, especially in the presence of …
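For concreteness, with notation assumed here rather than taken from the snippet: ERM minimizes the average risk $\min_\theta \mathbb{E}_{(x,y) \sim P}[\ell(\theta; x, y)]$, while the failure mode described above is measured by the worst-group risk $\max_{g} \mathbb{E}_{(x,y) \sim P_g}[\ell(\theta; x, y)]$, where $P_g$ is the data distribution of group $g$; a model can make the first quantity small while leaving the second large.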
Distributionally robust optimization (DRO) can improve the robustness and fairness of learning methods. In this paper, we devise stochastic algorithms for a class of DRO …
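As an illustration of what such a stochastic algorithm can look like, the sketch below implements the well-known exponentiated-gradient scheme for group-wise DRO (in the style of Sagawa et al.'s online group DRO algorithm) on a linear model with squared loss. It is a minimal sketch under those assumptions, not the algorithm of the paper above, and all function and variable names are illustrative.

```python
import numpy as np

def group_dro_sgd(data, dim, steps=2000, lr_theta=0.05, lr_q=0.5, seed=0):
    """Minimal sketch of online group DRO with exponentiated-gradient
    weights over groups, for a linear model with squared loss.

    data: list of (X_g, y_g) array pairs, one pair per group.
    """
    rng = np.random.default_rng(seed)
    m = len(data)
    theta = np.zeros(dim)
    q = np.full(m, 1.0 / m)          # weights over groups, on the simplex

    for _ in range(steps):
        # Draw one example per group; record its loss and gradient.
        losses = np.empty(m)
        grads = np.empty((m, dim))
        for g, (X, y) in enumerate(data):
            i = rng.integers(len(X))
            err = X[i] @ theta - y[i]
            losses[g] = 0.5 * err ** 2
            grads[g] = err * X[i]

        # Ascent step on q: upweight the groups with the highest loss.
        q *= np.exp(lr_q * losses)
        q /= q.sum()

        # Descent step on theta under the current worst-case mixture q.
        theta -= lr_theta * (q @ grads)

    return theta, q
```

The multiplicative update on q is mirror ascent over the simplex, so q concentrates on whichever groups currently incur the highest loss, and theta is then updated against that adversarial mixture rather than the uniform average.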
Models trained via empirical risk minimization (ERM) are known to rely on spurious correlations between labels and task-independent input features, resulting in poor …
B Gao, H Gouk, Y Yang… - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Generalising robustly to distribution shift is a major challenge that is pervasive across most real-world applications of machine learning. A recent study highlighted that many advanced …
While neural networks have shown remarkable success on classification tasks in terms of average-case performance, they often fail to perform well on certain groups of the data. Such …
C Zhou, X Ma, P Michel… - International Conference on Machine Learning, 2021 - proceedings.mlr.press
A central goal of machine learning is to learn robust representations that capture the fundamental relationship between inputs and output labels. However, minimizing training …
A Rame, C Dancette, M Cord - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Learning robust models that generalize well under changes in the data distribution is critical for real-world applications. To this end, there has been a surge of interest in learning …
R Zhai, C Dan, Z Kolter… - International Conference on Machine Learning, 2021 - proceedings.mlr.press
Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution. For such settings, a line of recent work has …
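A standard way to formalize this setting (general background, not necessarily the formulation of the paper above): for a level $\alpha \in (0, 1]$, the $\alpha$-worst-case risk takes the supremum over all subpopulations carrying at least an $\alpha$ fraction of the training mass, $R_\alpha(\theta) = \sup\{\mathbb{E}_{z \sim P'}[\ell(\theta; z)] : P = \alpha P' + (1 - \alpha) P''\}$, which equals the conditional value at risk (CVaR) of the loss at level $\alpha$; group DRO instead restricts the candidate subpopulations to a known finite set of groups.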
This paper investigates group distributionally robust optimization (GDRO), with the goal of learning a model that performs well over $m$ different distributions. First, we formulate …
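In standard notation (assumed here), this GDRO objective is the minimax problem $\min_{\theta} \max_{i \in [m]} \mathbb{E}_{z \sim P_i}[\ell(\theta; z)]$, or equivalently $\min_{\theta} \max_{q \in \Delta_m} \sum_{i=1}^{m} q_i \, \mathbb{E}_{z \sim P_i}[\ell(\theta; z)]$, where $\Delta_m$ is the probability simplex over the $m$ distributions. The second form exposes the problem as a two-player game, which is why exponentiated-gradient updates of the kind sketched earlier in this list apply.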