Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2023 - dl.acm.org
This paper provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Are my deep learning systems fair? An empirical study of fixed-seed training

S Qian, VH Pham, T Lutellier, Z Hu… - Advances in …, 2021 - proceedings.neurips.cc
Deep learning (DL) systems have been gaining popularity in critical tasks such as credit
evaluation and crime prediction. Such systems demand fairness. Recent work shows that DL …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Post-processing for individual fairness

F Petersen, D Mukherjee, Y Sun… - Advances in Neural …, 2021 - proceedings.neurips.cc
Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML
systems that are already used in production. The main appeal of post-processing is that it …

Two simple ways to learn individual fairness metrics from data

D Mukherjee, M Yurochkin… - … on Machine Learning, 2020 - proceedings.mlr.press
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the
drawbacks of group fairness. Despite its benefits, it depends on a task specific fair metric that …

FMP: Toward fair graph message passing against topology bias

Z Jiang, X Han, C Fan, Z Liu, N Zou… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite recent advances in achieving fair representations and predictions through
regularization, adversarial debiasing, and contrastive learning in graph neural networks …

Removing spurious features can hurt accuracy and affect groups disproportionately

F Khani, P Liang - Proceedings of the 2021 ACM conference on fairness …, 2021 - dl.acm.org
Spurious features interfere with the goal of obtaining robust models that perform well across
many groups within the population. A natural remedy is to remove such features from the …

Learning bias-invariant representation by cross-sample mutual information minimization

W Zhu, H Zheng, H Liao, W Li… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Deep learning algorithms mine knowledge from the training data and thus would likely
inherit the dataset's bias information. As a result, the obtained model would generalize …

Algorithmic fairness datasets: the story so far

A Fabris, S Messina, G Silvello, GA Susto - Data Mining and Knowledge …, 2022 - Springer
Data-driven algorithms are studied and deployed in diverse domains to support critical
decisions, directly impacting people's well-being. As a result, a growing community of …

Domain Adaptation meets Individual Fairness. And they get along.

D Mukherjee, F Petersen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Many instances of algorithmic bias are caused by distributional shifts. For example, machine
learning (ML) models often perform worse on demographic groups that are …