Simple data balancing achieves competitive worst-group-accuracy

BY Idrissi, M Arjovsky, M Pezeshki… - … on Causal Learning …, 2022 - proceedings.mlr.press
We study the problem of learning classifiers that perform well across (known or unknown)
groups of data. After observing that common worst-group-accuracy datasets suffer from …
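The balancing the title refers to can be as simple as equalizing group sizes before training. A minimal sketch of group-level subsampling, one of the simple baselines the paper studies (the paper also considers reweighting and class-level variants); array names and the toy data are mine:

import numpy as np

def subsample_groups(X, g, seed=0):
    """Drop examples so every group matches the smallest group's size."""
    rng = np.random.default_rng(seed)
    groups, counts = np.unique(g, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(g == grp), size=n_min, replace=False)
        for grp in groups
    ])
    return X[keep], g[keep]

# Toy usage: a 90/10 group imbalance becomes 10/10 after subsampling.
X = np.arange(100).reshape(-1, 1)
g = np.array([0] * 90 + [1] * 10)
Xb, gb = subsample_groups(X, g)
assert (gb == 0).sum() == (gb == 1).sum() == 10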

Spread spurious attribute: Improving worst-group accuracy with spurious attribute estimation

J Nam, J Kim, J Lee, J Shin - arXiv preprint arXiv:2204.02070, 2022 - arxiv.org
The paradigm of worst-group loss minimization has shown promise in avoiding learning
spurious correlations, but requires costly additional supervision on spurious attributes. To …
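The title's "spread" suggests propagating a few spurious-attribute labels to the whole training set before applying worst-group methods. A hedged sketch of that general recipe, not the paper's algorithm: the nearest-centroid pseudo-labeler and all names here are illustrative assumptions.

import numpy as np

def spread_attribute(feats, labeled_idx, attr_labeled):
    """Pseudo-label the spurious attribute for every example."""
    attrs = np.unique(attr_labeled)
    # One centroid per attribute value, fit on the few labeled examples.
    centroids = np.stack([
        feats[labeled_idx][attr_labeled == a].mean(axis=0) for a in attrs
    ])
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return attrs[dists.argmin(axis=1)]  # estimated attribute per example

# Two labeled examples are enough to group the rest in this toy case.
feats = np.array([[0.0], [0.1], [5.0], [5.1]])
est = spread_attribute(feats, np.array([0, 2]), np.array([0, 1]))
assert est.tolist() == [0, 0, 1, 1]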

Learning debiased classifier with biased committee

N Kim, S Hwang, S Ahn, J Park… - Advances in Neural …, 2022 - proceedings.neurips.cc
Neural networks are prone to bias towards spurious correlations between classes and
latent attributes exhibited in a major portion of the training data, which ruins their generalization …

Learning from failure: De-biasing classifier from biased classifier

J Nam, H Cha, S Ahn, J Lee… - Advances in Neural …, 2020 - proceedings.neurips.cc
Neural networks often learn to make predictions that overly rely on spurious correlations
existing in the dataset, which causes the model to be biased. While previous work tackles …
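As I understand Learning from Failure, a deliberately biased model is trained alongside the main model (e.g., with generalized cross-entropy), and samples the biased model finds hard are upweighted when training the debiased one. A sketch of that relative-difficulty weight; the per-sample losses are assumed given, and the epsilon is mine:

import numpy as np

def relative_difficulty(loss_biased, loss_debiased, eps=1e-8):
    """w(x) near 1 for bias-conflicting samples, near 0 for bias-aligned ones."""
    return loss_biased / (loss_biased + loss_debiased + eps)

w = relative_difficulty(np.array([2.3, 0.1]), np.array([0.5, 0.4]))
# The first sample, hard for the biased model, gets the larger weight.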

Omnifair: A declarative system for model-agnostic group fairness in machine learning

H Zhang, X Chu, A Asudeh, SB Navathe - Proceedings of the 2021 …, 2021 - dl.acm.org
Machine learning (ML) is increasingly being used to make decisions in our society. ML
models, however, can be unfair to certain demographic groups (e.g., African Americans or …

A systematic study of bias amplification

M Hall, L van der Maaten, L Gustafson, M Jones… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent research suggests that predictions made by machine-learning models can amplify
biases present in the training data. When a model amplifies bias, it makes certain …
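One common way to make "amplifies bias" concrete, following the measures this line of work builds on (this simplified binary version is mine, not the paper's exact metric): compare how often an attribute co-occurs with a label in the training data versus in the model's predictions.

import numpy as np

def cooccurrence_rate(attr, label):
    """P(attr = 1 | label = 1), estimated from binary arrays."""
    return attr[label == 1].mean()

attr = np.array([1, 1, 1, 0, 0, 0])   # protected attribute per example
y_train = np.array([1, 1, 0, 1, 0, 0])  # labels in the training data
y_pred = np.array([1, 1, 1, 0, 0, 0])   # the model's predictions
amplification = cooccurrence_rate(attr, y_pred) - cooccurrence_rate(attr, y_train)
# 1.0 - 2/3 > 0: predictions tie the label to the attribute more strongly than the data does.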

Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization

S Sagawa, PW Koh, TB Hashimoto, P Liang - arXiv preprint arXiv …, 2019 - arxiv.org
Overparameterized neural networks can be highly accurate on average on an i.i.d. test set yet
consistently fail on atypical groups of the data (e.g., by learning spurious correlations that …
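The group DRO objective minimizes the worst group's expected loss; the paper's online algorithm maintains a weight per group and moves weight toward whichever group is currently doing worst. A minimal sketch of that update (step size and names are illustrative, and in practice the robust loss is backpropagated through the model):

import numpy as np

def group_dro_step(q, group_losses, eta_q=0.01):
    """One exponentiated-gradient update of the group weights q."""
    q = q * np.exp(eta_q * group_losses)
    q = q / q.sum()
    robust_loss = float(q @ group_losses)
    return q, robust_loss

q = np.ones(4) / 4
q, loss = group_dro_step(q, np.array([0.2, 0.9, 0.3, 0.1]))
# q now puts the most mass on group 1, the worst-performing group.

Note the title's point: this only improves worst-group generalization when paired with strong regularization such as heavy weight decay or early stopping.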

Training well-generalizing classifiers for fairness metrics and other data-dependent constraints

A Cotter, M Gupta, H Jiang, N Srebro… - International …, 2019 - proceedings.mlr.press
Classifiers can be trained with data-dependent constraints to satisfy fairness goals, reduce
churn, achieve a targeted false positive rate, or meet other policy goals. We study the …
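The paper's actual proxy-Lagrangian machinery is more involved, but the snippet's idea can be sketched generically: add a multiplier on the constraint violation and grow it while the constraint is violated. The false-positive-rate target and all names below are my illustrative choices:

def lagrangian_step(main_loss, fpr, fpr_target, lam, eta_lam=0.1):
    """Penalize FPR above target; lambda rises while the constraint is violated."""
    violation = fpr - fpr_target
    total = main_loss + lam * violation        # minimized over model parameters
    lam = max(0.0, lam + eta_lam * violation)  # gradient ascent on the multiplier
    return total, lam

total, lam = lagrangian_step(main_loss=0.7, fpr=0.12, fpr_target=0.05, lam=0.0)
# lam grows to 0.007 because the FPR constraint is violated.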

Active sampling for min-max fairness

J Abernethy, P Awasthi, M Kleindessner… - arXiv preprint arXiv …, 2020 - arxiv.org
We propose simple active sampling and reweighting strategies for optimizing min-max
fairness that can be applied to any classification or regression model learned via loss …
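A sketch of the heading's idea as I read it: at each round, draw the next training sample from whichever group currently has the worst loss, so optimization effort flows to the worst-off group. The single-sample round and the names are my simplifications:

import numpy as np

def active_sample(indices_by_group, group_losses, rng):
    """Pick the worst-off group, then a random example from it."""
    g = int(np.argmax(group_losses))
    return g, rng.choice(indices_by_group[g])

rng = np.random.default_rng(0)
idx_by_group = {0: np.array([0, 1]), 1: np.array([2, 3])}
g, i = active_sample(idx_by_group, np.array([0.2, 0.8]), rng)
assert g == 1  # the next sample comes from the currently worst group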

Escaping saddle points for effective generalization on class-imbalanced data

H Rangwani, SK Aithal… - Advances in Neural …, 2022 - proceedings.neurips.cc
Real-world datasets exhibit imbalances of varying types and degrees. Several techniques
based on re-weighting and margin adjustment of loss are often used to enhance the …
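Of the two families the snippet names, margin adjustment is the easier to show compactly. A minimal sketch of class-prior logit adjustment, a standard margin technique and not this paper's contribution (the paper, per its title, targets the saddle points such losses can converge to):

import numpy as np

def logit_adjusted_ce(logits, y, class_priors, tau=1.0):
    """Cross-entropy on logits shifted by tau * log prior, enlarging rare-class margins."""
    adjusted = logits + tau * np.log(class_priors)
    shifted = adjusted - adjusted.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

logits = np.array([[2.0, 0.5], [0.1, 0.3]])
loss = logit_adjusted_ce(logits, np.array([0, 1]), class_priors=np.array([0.9, 0.1]))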