SelecMix: Debiased learning by contradicting-pair sampling

I Hwang, S Lee, Y Kwak, SJ Oh… - Advances in …, 2022 - proceedings.neurips.cc
Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended
decision rules, in particular when their training data is biased, i.e., when training labels are …

Unbiased supervised contrastive learning

CA Barbano, B Dufumier, E Tartaglione… - arXiv preprint arXiv …, 2022 - arxiv.org
Many datasets are biased: they contain easy-to-learn features that are highly
correlated with the target class only in the dataset but not in the true underlying distribution …

Unbiased classification through bias-contrastive and bias-balanced learning

Y Hong, E Yang - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Datasets for training machine learning models tend to be biased unless the data is collected
with complete care. In such a biased dataset, models are susceptible to making predictions …

Learning from failure: De-biasing classifier from biased classifier

J Nam, H Cha, S Ahn, J Lee… - Advances in Neural …, 2020 - proceedings.neurips.cc
Neural networks often learn to make predictions that overly rely on spurious correlation
existing in the dataset, which causes the model to be biased. While previous work tackles …

Over-training with mixup may hurt generalization

Z Liu, Z Wang, H Guo, Y Mao - arXiv preprint arXiv:2303.01475, 2023 - arxiv.org
Mixup, which creates synthetic training instances by linearly interpolating random sample
pairs, is a simple yet effective regularization technique to boost the performance of deep …
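
For reference, the interpolation step that mixup performs is easy to sketch. The following is a minimal illustrative example, not the code of the paper above; it assumes NumPy, one-hot labels, and a mixing coefficient drawn from a Beta(alpha, alpha) distribution, which is the usual formulation.

import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    # Sketch of mixup: pair each sample with a randomly chosen partner
    # and take a convex combination of both inputs and labels.
    # x: (batch, ...) inputs; y: (batch, num_classes) one-hot labels.
    # alpha is an assumed Beta-distribution parameter (a common default).
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))             # random pairing of samples
    x_mix = lam * x + (1.0 - lam) * x[perm]    # linear interpolation of inputs
    y_mix = lam * y + (1.0 - lam) * y[perm]    # same interpolation of labels
    return x_mix, y_mix

Training then proceeds on (x_mix, y_mix) with the usual loss; the paper above studies when prolonged training under this scheme can hurt generalization.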

Learning to reweight examples for robust deep learning

M Ren, W Zeng, B Yang… - … conference on machine …, 2018 - proceedings.mlr.press
Deep neural networks have been shown to be very powerful modeling tools for many
supervised learning tasks involving complex input patterns. However, they can also easily …

Learning de-biased representations with biased representations

H Bahng, S Chun, S Yun, J Choo… - … on Machine Learning, 2020 - proceedings.mlr.press
Many machine learning algorithms are trained and evaluated by splitting data from a single
source into training and test sets. While such focus on in-distribution learning scenarios has …

Towards assumption-free bias mitigation

CY Chang, YN Chuang, KH Lai, X Han, X Hu… - arXiv preprint arXiv …, 2023 - arxiv.org
Despite their impressive prediction ability, machine learning models show discrimination
towards certain demographics and suffer from unfair prediction behaviors. To alleviate the …

Towards a unified framework of contrastive learning for disentangled representations

S Matthes, Z Han, H Shen - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Contrastive learning has recently emerged as a promising approach for learning data
representations that discover and disentangle the explanatory factors of the data. Previous …

Learning to split for automatic bias detection

Y Bao, R Barzilay - arXiv preprint arXiv:2204.13749, 2022 - arxiv.org
Classifiers are biased when trained on biased datasets. As a remedy, we propose Learning
to Split (ls), an algorithm for automatic bias detection. Given a dataset with input-label pairs …