Contrastive adapters for foundation model group robustness

M Zhang, C Ré - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
While large pretrained foundation models (FMs) have shown remarkable zero-shot
classification robustness to dataset-level distribution shifts, their robustness to subpopulation …
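
For context, a minimal sketch of the general idea named in the title: train a lightweight adapter on top of frozen foundation-model embeddings with a supervised contrastive objective. The Adapter class and loss below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    # Small residual MLP trained on top of frozen foundation-model embeddings.
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, z):
        # Residual connection keeps adapted embeddings close to the originals.
        return F.normalize(z + self.net(z), dim=-1)

def supcon_loss(z, y, temperature=0.1):
    # Supervised contrastive loss: same-class embeddings attract, others repel.
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(y), dtype=torch.bool)
    pos_mask = (y[:, None] == y[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask.float()).sum(1).div(pos_counts).mean()

# Toy usage: 512-d embeddings from a frozen model, 4 classes.
z = F.normalize(torch.randn(32, 512), dim=-1)
y = torch.randint(0, 4, (32,))
adapter = Adapter(512)
loss = supcon_loss(adapter(z), y)
loss.backward()
```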

Selecmix: Debiased learning by contradicting-pair sampling

I Hwang, S Lee, Y Kwak, SJ Oh… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended
decision rules, in particular when their training data is biased, i.e., when training labels are …
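
A rough sketch of the contradicting-pair idea the title refers to: mix each example with a same-label partner whose bias feature is most dissimilar. The bias_emb input (embeddings from an auxiliary model assumed to latch onto the bias) and the mixup form are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def mixup_contradicting_pairs(x, y, bias_emb, alpha=1.0):
    # Pair each example with a same-label partner whose bias embedding is
    # least similar, so the mixed sample contradicts the spurious feature.
    emb = F.normalize(bias_emb, dim=-1)
    sim = emb @ emb.t()                      # cosine similarity in [-1, 1]
    same_label = y[:, None] == y[None, :]
    sim = sim.masked_fill(~same_label, 2.0)  # exclude different-label pairs
    sim.fill_diagonal_(2.0)                  # exclude self-pairs
    partner = sim.argmin(dim=1)              # most bias-dissimilar same-label example
    lam = torch.distributions.Beta(alpha, alpha).sample((len(x),))
    lam = lam.view(-1, *([1] * (x.dim() - 1)))
    return lam * x + (1 - lam) * x[partner], y  # labels agree within each pair

# Toy usage
x, y = torch.randn(16, 3, 32, 32), torch.randint(0, 2, (16,))
bias_emb = torch.randn(16, 64)
x_mixed, y_mixed = mixup_contradicting_pairs(x, y, bias_emb)
```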

Robustness to subpopulation shift with domain label noise via regularized annotation of domains

N Stromberg, R Ayyagari, M Welfert, S Koyejo… - arXiv preprint arXiv …, 2024 - arxiv.org
Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA)
rely heavily on well-annotated groups in the training data. We show, both in theory and …
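
Since this and the following entries all optimize worst-group accuracy, a small self-contained definition of the metric may help; the group IDs here are assumed to index (label, domain) combinations.

```python
import torch

def worst_group_accuracy(preds, labels, groups):
    # WGA: the minimum per-group accuracy over all groups.
    accs = [(preds[groups == g] == labels[groups == g]).float().mean()
            for g in groups.unique()]
    return torch.stack(accs).min()

# Toy usage: group 0 is perfectly classified, group 1 is not.
preds  = torch.tensor([0, 1, 1, 0, 1])
labels = torch.tensor([0, 1, 0, 0, 1])
groups = torch.tensor([0, 0, 1, 1, 1])
print(worst_group_accuracy(preds, labels, groups))  # tensor(0.6667)
```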

Theoretical Guarantees of Data Augmented Last Layer Retraining Methods

M Welfert, N Stromberg, L Sankar - arXiv preprint arXiv:2405.05934, 2024 - arxiv.org
Ensuring fair predictions across many distinct subpopulations in the training data can be
prohibitive for large models. Recently, simple linear last layer retraining strategies, in …
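
A minimal sketch of the linear last-layer retraining setup these guarantees concern, under the assumption that frozen embeddings for a held-out (possibly augmented or group-balanced) set are available; this is an illustration of the general technique, not the paper's method.

```python
import torch
import torch.nn as nn

def retrain_last_layer(features, labels, num_classes, epochs=200, lr=1e-2):
    # features: (N, D) frozen embeddings from the pretrained upstream model.
    # Only the linear head is trained; the feature extractor stays fixed.
    head = nn.Linear(features.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(head(features), labels).backward()
        opt.step()
    return head

# Toy usage: refit a binary head on 200 held-out embeddings.
feats = torch.randn(200, 64)
labs = torch.randint(0, 2, (200,))
head = retrain_last_layer(feats, labs, num_classes=2)
```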

For Robust Worst-Group Accuracy, Ignore Group Annotations

N Stromberg, R Ayyagari, M Welfert, S Koyejo… - … on Machine Learning … - openreview.net
Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA)
rely heavily on well-annotated groups in the training data. We show, both in theory and …

Improving Fairness of Pretrained Models in the Absence of Domain Annotations

R Ayyagari - 2024 - search.proquest.com
Last-layer retraining methods form a versatile and efficient class of corrections to improve
the fairness of upstream models. These methods use domain annotations and target labels …