How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Abstract Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …

On the privacy risks of algorithmic fairness

H Chang, R Shokri - 2021 IEEE European Symposium on …, 2021 - ieeexplore.ieee.org
Algorithmic fairness and privacy are essential pillars of trustworthy machine learning. Fair
machine learning aims at minimizing discrimination against protected groups by, for …

LinkTeller: Recovering private edges from graph neural networks via influence analysis

F Wu, Y Long, C Zhang, B Li - 2022 IEEE Symposium on …, 2022 - ieeexplore.ieee.org
Graph structured data have enabled several successful applications such as
recommendation systems and traffic prediction, given the rich node features and edges …

SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning

A Salem, G Cherubin, D Evans, B Köpf… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …

Antipodes of label differential privacy: PATE and ALIBI

M Malek Esmaeili, I Mironov, K Prasad… - Advances in …, 2021 - proceedings.neurips.cc
We consider the privacy-preserving machine learning (ML) setting where the trained model
must satisfy differential privacy (DP) with respect to the labels of the training examples. We …

Bayesian estimation of differential privacy

S Zanella-Beguelin, L Wutschitz… - International …, 2023 - proceedings.mlr.press
Abstract Algorithms such as Differentially Private SGD enable training machine learning
models with formal privacy guarantees. However, because these guarantees hold with …

Analyzing privacy leakage in machine learning via multiple hypothesis testing: A lesson from Fano

C Guo, A Sablayrolles… - … Conference on Machine …, 2023 - proceedings.mlr.press
Differential privacy (DP) is by far the most widely accepted framework for mitigating privacy
risks in machine learning. However, exactly how small the privacy parameter $\epsilon …

Membership Inference Attacks and Defenses in Federated Learning: A Survey

L Bai, H Hu, Q Ye, H Li, L Wang, J Xu - ACM Computing Surveys, 2024 - dl.acm.org
Federated learning is a decentralized machine learning approach where clients train
models locally and share model updates to develop a global model. This enables low …

Statistically valid inferences from privacy-protected data

G Evans, G King, M Schwenzfeier… - … Political Science Review, 2023 - cambridge.org
Unprecedented quantities of data that could help social scientists understand and
ameliorate the challenges of human society are presently locked away inside companies …