Privacy auditing with one (1) training run

T Steinke, M Nasr, M Jagielski - Advances in Neural …, 2024 - proceedings.neurips.cc
We propose a scheme for auditing differentially private machine learning systems with a
single training run. This exploits the parallelism of being able to add or remove multiple …
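For intuition, here is a minimal sketch of the one-run auditing idea (not the authors' exact estimator; the canary count, threshold, and membership score below are hypothetical stand-ins): include each of many canaries independently with probability 1/2 in a single training run, then convert the attacker's guessing accuracy into a crude lower bound on the privacy parameter ε.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 1000  # number of canary examples (hypothetical count)
included = rng.integers(0, 2, size=m).astype(bool)  # independent coin flip per canary

# Stand-in for a membership score computed from the single trained model,
# e.g. each canary's loss; here we simulate a weak signal for illustration.
scores = rng.normal(loc=included.astype(float), scale=2.0)
guesses = scores > 0.5  # attacker guesses "included" above a threshold

accuracy = np.mean(guesses == included)

# Under pure eps-DP, no attacker can guess the membership of a coin-flip
# canary with probability above e^eps / (1 + e^eps); inverting this gives a
# crude point estimate of an eps lower bound (the paper's estimator adds
# the confidence-interval machinery this sketch omits).
if 0.5 < accuracy < 1.0:
    eps_lower = np.log(accuracy / (1.0 - accuracy))
    print(f"guess accuracy {accuracy:.3f} -> eps lower bound {eps_lower:.3f}")
```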

An empirical study of rich subgroup fairness for machine learning

M Kearns, S Neel, A Roth, ZS Wu - Proceedings of the conference on …, 2019 - dl.acm.org
Kearns, Neel, Roth, and Wu [ICML 2018] recently proposed a notion of rich subgroup
fairness intended to bridge the gap between statistical and individual notions of fairness …
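Rich subgroup fairness asks that a metric (e.g. false-positive rate) hold not just on a few predefined groups but on every subgroup definable by a simple class of functions over protected attributes, and auditing for a violation reduces to a learning problem. A hedged sketch of one auditing step (the function name and the linear auditor class are illustrative assumptions, not the paper's exact experimental setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def audit_fp_disparity(protected, y_true, y_pred):
    """Search for a subgroup, defined by a linear function of the protected
    attributes, whose false-positive rate deviates from the overall FP rate,
    via the auditing-as-learning reduction of Kearns et al. (ICML 2018)."""
    neg = y_true == 0                  # FP rate is defined over true negatives
    fp = (y_pred[neg] == 1).astype(int)
    if fp.min() == fp.max():
        return 0.0                     # no false positives (or only FPs): nothing to find
    base_rate = fp.mean()
    # Fit a simple model predicting which negatives were false positives;
    # the region it flags is a candidate violating subgroup.
    auditor = LogisticRegression().fit(protected[neg], fp)
    in_group = auditor.predict(protected[neg]).astype(bool)
    if not in_group.any():
        return 0.0
    # Size-weighted disparity, as in rich subgroup fairness definitions.
    return abs(fp[in_group].mean() - base_rate) * in_group.mean()
```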

Sample complexity bounds for differentially private learning

K Chaudhuri, D Hsu - … of the 24th Annual Conference on …, 2011 - proceedings.mlr.press
This work studies the problem of privacy-preserving classification, namely, learning a
classifier from sensitive data while preserving the privacy of individuals in the training set. In …

Antipodes of label differential privacy: PATE and ALIBI

M Malek, I Mironov, K Prasad, I Shilov… - arXiv preprint arXiv …, 2021 - arxiv.org
We consider the privacy-preserving machine learning (ML) setting where the trained model
must satisfy differential privacy (DP) with respect to the labels of the training examples. We …
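PATE and ALIBI are the two mechanisms the paper contrasts; for a feel of what label differential privacy demands, here is the classic randomized-response baseline on labels (a minimal sketch for intuition, not either of the paper's mechanisms):

```python
import numpy as np

def randomized_response(labels: np.ndarray, num_classes: int, eps: float,
                        rng: np.random.Generator) -> np.ndarray:
    """Classic randomized response over class labels: keep the true label
    with probability e^eps / (e^eps + k - 1), otherwise output a uniformly
    random *other* class. This satisfies eps-DP for each example's label."""
    k = num_classes
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    keep = rng.random(len(labels)) < p_keep
    noise = rng.integers(0, k, size=len(labels))
    # Resample collisions so the "lie" is uniform over the other classes.
    collide = ~keep & (noise == labels)
    while collide.any():
        noise[collide] = rng.integers(0, k, size=collide.sum())
        collide = ~keep & (noise == labels)
    return np.where(keep, labels, noise)

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=5)
print(y, randomized_response(y, num_classes=10, eps=1.0, rng=rng))
```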

Differentially private and fair deep learning: A Lagrangian dual approach

C Tran, F Fioretto, P Van Hentenryck - Proceedings of the AAAI …, 2021 - ojs.aaai.org
A critical concern in data-driven decision making is to build models whose outcomes do not
discriminate against some demographic groups, including gender, ethnicity, or age. To …
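The title names the technique: fold the fairness constraint into the training objective via a Lagrange multiplier and solve the resulting min-max problem, with noise added for privacy. As a hedged reading of the general template (the paper's exact constraint F, tolerance α, and privacy accounting differ):

```latex
% Generic Lagrangian-dual template for fairness-constrained training
% (a sketch of the pattern, not the paper's exact objective).
\min_{\theta}\; \max_{\lambda \ge 0}\;
  \mathcal{L}(\theta) + \lambda\,\bigl(F(\theta) - \alpha\bigr)
% Primal step: (noisy) gradient descent on \theta;
% dual step:   \lambda \leftarrow \max\{0,\; \lambda + \eta_{\lambda}\,(F(\theta) - \alpha)\}.
```

Here $\mathcal{L}(\theta)$ is the empirical loss, $F(\theta)$ a fairness-violation measure, and $\alpha$ the tolerated violation; dual ascent raises $\lambda$ whenever the constraint is violated.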

Adaptive learning with robust generalization guarantees

R Cummings, K Ligett, K Nissim… - … on Learning Theory, 2016 - proceedings.mlr.press
The traditional notion of generalization, i.e., learning a hypothesis whose empirical
error is close to its true error, is surprisingly brittle. As has recently been noted [Dwork et al …

Fair classification with adversarial perturbations

LE Celis, A Mehrotra, N Vishnoi - Advances in Neural …, 2021 - proceedings.neurips.cc
We study fair classification in the presence of an omniscient adversary that, given an $\eta$,
is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily …

Making the shoe fit: Architectures, initializations, and tuning for learning with privacy

N Papernot, S Chien, S Song, A Thakurta, U Erlingsson - 2019 - openreview.net
Because learning sometimes involves sensitive data, standard machine-learning algorithms
have been extended to offer strong privacy guarantees for training data. However, in …
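The tuning in question is for DP-SGD-style training (Abadi et al., 2016), where architecture and initialization choices interact with per-example clipping and noise. A minimal sketch of the update those choices are tuned around (parameter values are placeholders):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clip norm."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the sum is N(0, (sigma*C)^2); on the mean, divide by batch size.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy_grad = mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * noisy_grad

params = np.zeros(3)
grads = [np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.1, -0.2])]
print(dp_sgd_step(params, grads, rng=np.random.default_rng(0)))
```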

Privacy, accuracy, and model fairness trade-offs in federated learning

X Gu, T Zhu, J Li, T Zhang, W Ren, KKR Choo - Computers & Security, 2022 - Elsevier
As applications of machine learning become increasingly widespread, the need to ensure
model accuracy and fairness while protecting the privacy of user data becomes more …

Model explanations with differential privacy

N Patel, R Shokri, Y Zick - Proceedings of the 2022 ACM Conference on …, 2022 - dl.acm.org
Using machine learning models in critical decision-making processes has given rise to a call
for algorithmic transparency. Model explanations, however, might leak information about the …