A comparative study of fairness-enhancing interventions in machine learning

SA Friedler, C Scheidegger… - Proceedings of the …, 2019 - dl.acm.org
Computers are increasingly used to make decisions that have significant impact on people's
lives. Often, these predictions can affect different population subgroups disproportionately …

Conditional learning of fair representations

H Zhao, A Coston, T Adel, GJ Gordon - arXiv preprint arXiv:1910.07162, 2019 - arxiv.org
We propose a novel algorithm for learning fair representations that can simultaneously
mitigate two notions of disparity among different demographic subgroups in the classification …

Improving fairness of artificial intelligence algorithms in privileged-group selection bias data settings

D Pessach, E Shmueli - Expert Systems with Applications, 2021 - Elsevier
An increasing number of decisions regarding the daily lives of human beings are being
controlled by artificial intelligence (AI) algorithms. Since they now touch on many aspects of …

Omnifair: A declarative system for model-agnostic group fairness in machine learning

H Zhang, X Chu, A Asudeh, SB Navathe - Proceedings of the 2021 …, 2021 - dl.acm.org
Machine learning (ML) is increasingly being used to make decisions in our society. ML
models, however, can be unfair to certain demographic groups (e.g., African Americans or …

Generalized demographic parity for group fairness

Z Jiang, X Han, C Fan, F Yang, A Mostafavi… - … Conference on Learning …, 2022 - par.nsf.gov
This work aims to generalize demographic parity to continuous sensitive attributes while
preserving tractable computation. Current fairness metrics for continuous sensitive attributes …
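
The standard, discrete-group notion of demographic parity that this work sets out to generalize compares positive-prediction rates across groups. A minimal sketch of that baseline metric, assuming binary predictions and a binary sensitive attribute (the continuous-attribute generalization is the paper's contribution and is not shown here):

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Discrete-group demographic parity gap:
    |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[sensitive == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Toy usage with a binary sensitive attribute.
y_hat = [1, 0, 1, 1, 0, 1, 0, 0]
s     = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_hat, s))  # 0.5
```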

On the impact of machine learning randomness on group fairness

P Ganesh, H Chang, M Strobel, R Shokri - Proceedings of the 2023 ACM …, 2023 - dl.acm.org
Statistical measures for group fairness in machine learning reflect the gap in performance of
algorithms across different groups. These measures, however, exhibit a high variance …
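
A minimal sketch of the kind of measurement this snippet refers to: the per-group accuracy gap of a model retrained under different random seeds, using a synthetic dataset and scikit-learn (an illustrative setup, not the paper's experiments):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data; feature 0 stands in for a binary group attribute (hypothetical).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
s = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.3, random_state=0)

gaps = []
for seed in range(10):  # only the training randomness varies
    clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    acc_0 = (pred[s_te == 0] == y_te[s_te == 0]).mean()  # accuracy on group 0
    acc_1 = (pred[s_te == 1] == y_te[s_te == 1]).mean()  # accuracy on group 1
    gaps.append(abs(acc_0 - acc_1))

print(f"accuracy gap: mean={np.mean(gaps):.3f}, std={np.std(gaps):.3f}")
```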

On the privacy risks of algorithmic fairness

H Chang, R Shokri - 2021 IEEE European Symposium on …, 2021 - ieeexplore.ieee.org
Algorithmic fairness and privacy are essential pillars of trustworthy machine learning. Fair
machine learning aims at minimizing discrimination against protected groups by, for …

Decoupled classifiers for group-fair and efficient machine learning

C Dwork, N Immorlica, AT Kalai… - Conference on …, 2018 - proceedings.mlr.press
When it is ethical and legal to use a sensitive attribute (such as gender or race) in machine
learning systems, the question remains how to do so. We show that the naive application of …
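
The "decoupled" idea in the title is, at its simplest, to fit a separate classifier for each sensitive group and route examples accordingly. The snippet notes that the naive application can go wrong, and the paper's actual method is more involved than the sketch below, which only illustrates the basic per-group setup using scikit-learn and a discrete sensitive attribute:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_decoupled(X, y, s):
    """Fit one classifier per sensitive group (the basic decoupling idea)."""
    return {g: LogisticRegression(max_iter=1000).fit(X[s == g], y[s == g])
            for g in np.unique(s)}

def predict_decoupled(models, X, s):
    """Route each example to the classifier trained for its group."""
    out = np.empty(len(X), dtype=int)
    for g, model in models.items():
        mask = (s == g)
        if mask.any():
            out[mask] = model.predict(X[mask])
    return out

# Toy usage on random data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
s = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * s > 0).astype(int)
models = fit_decoupled(X, y, s)
print(predict_decoupled(models, X[:5], s[:5]))
```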

Practical privacy-preserving k-means clustering

P Mohassel, M Rosulek, N Trieu - Proceedings on privacy …, 2020 - petsymposium.org
Clustering is a common data-analysis technique that aims to partition data into similar
groups. When the data comes from different sources, it is highly desirable to maintain the …
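
For reference, the underlying computation is ordinary Lloyd's k-means; a plain, non-private sketch follows. Per its title, the paper is about carrying out this kind of iteration in a privacy-preserving way across data sources, which this sketch does not attempt:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain (non-private) Lloyd's k-means, shown only as the baseline
    computation; it provides none of the privacy protections the paper targets."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its cluster
        # (keep the old center if a cluster is empty).
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy usage.
X = np.random.default_rng(1).normal(size=(100, 2))
labels, centers = kmeans(X, k=3)
```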

FNNC: Achieving fairness through neural networks

M Padala, S Gujar - … of the Twenty-Ninth International Joint …, 2020 - scholar.archive.org
In classification models, fairness can be ensured by solving a constrained optimization
problem. We focus on fairness constraints like Disparate Impact, Demographic Parity, and …
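
One common way to relax such a constrained problem is to add a differentiable penalty on the fairness gap to the training loss. The sketch below does this with a demographic parity penalty in PyTorch; it is an illustration of the general idea only, not necessarily FNNC's exact surrogate or optimization scheme:

```python
import torch
import torch.nn as nn

# Hypothetical setup: binary classifier with a penalty on the demographic
# parity gap, loss = BCE + lambda * |E[p | s=0] - E[p | s=1]|.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # penalty weight trading accuracy against parity

def step(X, y, s):
    logits = model(X).squeeze(1)
    p = torch.sigmoid(logits)
    dp_gap = (p[s == 0].mean() - p[s == 1].mean()).abs()  # batch parity gap
    loss = bce(logits, y) + lam * dp_gap
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item(), dp_gap.item()

# Toy batch (random data, purely illustrative).
X = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
s = torch.randint(0, 2, (64,))
print(step(X, y, s))
```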