Differentially private fair learning

M Jagielski, M Kearns, J Mao, A Oprea… - International …, 2019 - proceedings.mlr.press
Motivated by settings in which predictive models may be required to be non-discriminatory
with respect to certain attributes (such as race), but even collecting the sensitive attribute …

Fair learning with private demographic data

H Mozannar, M Ohannessian… - … Conference on Machine …, 2020 - proceedings.mlr.press
Sensitive attributes such as race are rarely available to learners in real-world settings, as
their collection is often restricted by laws and regulations. We give a scheme that allows …

How unfair is private learning?

A Sanyal, Y Hu, F Yang - Uncertainty in Artificial Intelligence, 2022 - proceedings.mlr.press
As machine learning algorithms are deployed on sensitive data in critical decision-making
processes, it is becoming increasingly important that they are also private and fair. In this …

Differential privacy has bounded impact on fairness in classification

P Mangold, M Perrot, A Bellet… - … on Machine Learning, 2023 - proceedings.mlr.press
We theoretically study the impact of differential privacy on fairness in classification. We prove
that, given a class of models, popular group fairness measures are pointwise Lipschitz …
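
For concreteness, a typical group fairness measure of the kind such a result concerns is the demographic parity gap of a classifier h with respect to a sensitive attribute A,
  |Pr[h(X) = 1 | A = a] − Pr[h(X) = 1 | A = b]|,
viewed as a function of the model parameters. Demographic parity is given here only as a standard example; the snippet does not list which measures the paper actually treats.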

Differential privacy and machine learning: a survey and review

Z Ji, ZC Lipton, C Elkan - arXiv preprint arXiv:1412.7584, 2014 - arxiv.org
The objective of machine learning is to extract useful information from data, while privacy is
preserved by concealing information. Thus it seems hard to reconcile these competing …

Bounding user contributions: A bias-variance trade-off in differential privacy

K Amin, A Kulesza, A Munoz… - … on Machine Learning, 2019 - proceedings.mlr.press
Differentially private learning algorithms protect individual participants in the training dataset
by guaranteeing that their presence does not significantly change the resulting model. In …
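
For reference, the guarantee alluded to here is standard (ε, δ)-differential privacy: a randomized learner M satisfies it if, for all datasets D and D′ differing in one participant's data and all sets of output models S,
  Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D′) ∈ S] + δ.
This is the textbook definition; the precise notion of neighboring datasets used when bounding per-user contributions is as specified in the paper itself.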

On the compatibility of privacy and fairness

R Cummings, V Gupta, D Kimpara… - Adjunct publication of the …, 2019 - dl.acm.org
In this work, we investigate whether privacy and fairness can be simultaneously achieved by
a single classifier in several different models. Some of the earliest work on fairness in …

Differentially private and fair classification via calibrated functional mechanism

J Ding, X Zhang, X Li, J Wang, R Yu, M Pan - Proceedings of the AAAI …, 2020 - aaai.org
Machine learning is becoming an increasingly powerful tool for making decisions in a
wide variety of applications, such as medical diagnosis and autonomous driving. Privacy …

Differentially private empirical risk minimization under the fairness lens

C Tran, M Dinh, F Fioretto - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Differential Privacy (DP) is an important privacy-enhancing technology for private machine
learning systems. It allows one to measure and bound the risk associated with an individual …

Differentially private empirical risk minimization

K Chaudhuri, C Monteleoni, AD Sarwate - Journal of Machine Learning …, 2011 - jmlr.org
Privacy-preserving machine learning algorithms are crucial for the increasingly common
setting in which personal data, such as medical or financial records, are analyzed. We …