Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Bias and unfairness in machine learning models: a systematic review on datasets, tools, fairness metrics, and identification and mitigation methods

TP Pagano, RB Loureiro, FVN Lisboa… - Big data and cognitive …, 2023 - mdpi.com
One of the challenges of artificial intelligence is ensuring that model decisions are fair and
free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and …

In-processing modeling techniques for machine learning fairness: A survey

M Wan, D Zha, N Liu, N Zou - ACM Transactions on Knowledge …, 2023 - dl.acm.org
Machine learning models are becoming pervasive in high-stakes applications. Despite their
clear benefits in terms of performance, the models could show discrimination against …

An adversarial training framework for mitigating algorithmic biases in clinical machine learning

J Yang, AAS Soltan, DW Eyre, Y Yang… - NPJ digital medicine, 2023 - nature.com
Machine learning is becoming increasingly prominent in healthcare. Although its
benefits are clear, growing attention is being given to how these tools may exacerbate …

Fair graph distillation

Q Feng, ZS Jiang, R Li, Y Wang… - Advances in Neural …, 2024 - proceedings.neurips.cc
As graph neural networks (GNNs) struggle with large-scale graphs due to high
computational demands, data distillation for graph data promises to alleviate this issue by …

Setting the trap: Capturing and defeating backdoors in pretrained language models through honeypots

RR Tang, J Yuan, Y Li, Z Liu… - Advances in Neural …, 2023 - proceedings.neurips.cc
In the field of natural language processing, the prevalent approach involves fine-tuning
pretrained language models (PLMs) using local samples. Recent research has exposed the …

Fairprune: Achieving fairness through pruning for dermatological disease diagnosis

Y Wu, D Zeng, X Xu, Y Shi, J Hu - International Conference on Medical …, 2022 - Springer
Many works have shown that deep learning-based medical image classification models can
exhibit bias toward certain demographic attributes like race, gender, and age. Existing bias …

FMP: Toward fair graph message passing against topology bias

Z Jiang, X Han, C Fan, Z Liu, N Zou… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite recent advances in achieving fair representations and predictions through
regularization, adversarial debiasing, and contrastive learning in graph neural networks …

How unfair is private learning?

A Sanyal, Y Hu, F Yang - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
As machine learning algorithms are deployed on sensitive data in critical decision making
processes, it is becoming increasingly important that they are also private and fair. In this …

Weak proxies are sufficient and preferable for fairness with missing sensitive attributes

Z Zhu, Y Yao, J Sun, H Li, Y Liu - … Conference on Machine …, 2023 - proceedings.mlr.press
Evaluating fairness can be challenging in practice because the sensitive attributes of data
are often inaccessible due to privacy constraints. The go-to approach that the industry …