A classification of feedback loops and their relation to biases in automated decision-making systems

N Pagan, J Baumann, E Elokda… - Proceedings of the 3rd …, 2023 - dl.acm.org
Prediction-based decision-making systems are becoming increasingly prevalent in various
domains. Previous studies have demonstrated that such systems are vulnerable to runaway …

Bias on demand: a modelling framework that generates synthetic data with bias

J Baumann, A Castelnovo, R Crupi… - Proceedings of the …, 2023 - dl.acm.org
Nowadays, Machine Learning (ML) systems are widely used in various businesses and are
increasingly being adopted to make decisions that can significantly impact people's lives …
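
To make the idea of controllable bias injection concrete, here is a minimal, hedged sketch (an illustration of the general idea, not the modelling framework from the cited paper): a synthetic dataset in which a single parameter controls how strongly the protected attribute distorts the observed labels.

```python
import numpy as np

def generate_biased_data(n=10_000, bias_strength=0.3, seed=0):
    """Toy synthetic data with a tunable historical-bias parameter (illustrative only)."""
    rng = np.random.default_rng(seed)
    group = rng.integers(0, 2, size=n)               # protected attribute (0 or 1)
    ability = rng.normal(size=n)                     # latent merit, independent of group
    y_true = (ability > 0).astype(int)               # unbiased ground-truth label
    score = ability - bias_strength * (group == 0)   # group 0 penalised in the recorded data
    y_observed = (score > 0).astype(int)             # biased label actually observed
    x = ability + rng.normal(scale=0.5, size=n)      # noisy observable feature
    return x, group, y_true, y_observed

x, group, y_true, y_obs = generate_biased_data(bias_strength=0.5)
# The gap in observed positive rates between groups grows with bias_strength.
print(y_obs[group == 1].mean() - y_obs[group == 0].mean())
```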

To be forgotten or to be fair: Unveiling fairness implications of machine unlearning methods

D Zhang, S Pan, T Hoang, Z Xing, M Staples, X Xu… - AI and Ethics, 2024 - Springer
The right to be forgotten (RTBF) allows individuals to request the removal of personal
information from online platforms. Researchers have proposed machine unlearning …

Fairness and risk: an ethical argument for a group fairness definition insurers can use

J Baumann, M Loi - Philosophy & Technology, 2023 - Springer
Algorithmic predictions offer insurance companies a promising means of developing personalized risk
models for determining premiums. In this context, issues of fairness, discrimination, and …

Consensus and subjectivity of skin tone annotation for ML fairness

C Schumann, F Olanubi, A Wright… - Advances in …, 2024 - proceedings.neurips.cc
Understanding different human attributes and how they affect model behavior may become
a standard need for all model creation and usage, from traditional computer vision tasks to …

Perspectives of Defining Algorithmic Fairness in Customer-oriented Applications: A Systematic Literature Review

M Maw, SC Haw, KW Ng - International Journal on …, 2024 - search.ebscohost.com
Automated decision-making systems are widely deployed across different types of
businesses, including customer-oriented sectors, and have brought considerable achievements in …

Deceptive fairness attacks on graphs via meta learning

J Kang, Y Xia, R Maciejewski, J Luo, H Tong - arXiv preprint arXiv …, 2023 - arxiv.org
We study deceptive fairness attacks on graphs to answer the following question: how can
we mount poisoning attacks on a graph learning model that exacerbate the bias …

Distributive justice as the foundational premise of fair ML: Unification, extension, and interpretation of group fairness metrics

J Baumann, C Hertweck, M Loi, C Heitz - arXiv preprint arXiv:2206.02897, 2022 - arxiv.org
Group fairness metrics are an established way of assessing the fairness of prediction-based
decision-making systems. However, these metrics are still insufficiently linked to …
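
The group fairness metrics this line of work builds on can be stated in a few lines of code. The sketch below is a generic illustration of two common definitions (demographic parity and equal opportunity), assuming binary decisions and a binary protected attribute; it is not code from the cited paper.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive decision rates between group 1 and group 0."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true positive rates between group 1 and group 0."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

# Toy usage with random arrays; in practice y_pred would be a model's decisions.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)
y_true = rng.integers(0, 2, size=1_000)
y_pred = rng.integers(0, 2, size=1_000)
print(demographic_parity_diff(y_pred, group))
print(equal_opportunity_diff(y_true, y_pred, group))
```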

FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods

X Han, J Chi, Y Chen, Q Wang, H Zhao, N Zou… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking
framework for in-processing group fairness methods. Ensuring fairness in machine learning …

Group fairness in prediction-based decision making: From moral assessment to implementation

J Baumann, C Heitz - 2022 9th Swiss Conference on Data …, 2022 - ieeexplore.ieee.org
Ensuring the fairness of prediction-based decision making typically relies on statistical group fairness
criteria. Which of these criteria is the morally most appropriate depends on the …