Are we learning yet? A meta review of evaluation failures across machine learning

T Liao, R Taori, ID Raji, L Schmidt - Thirty-fifth Conference on …, 2021 - openreview.net
Many subfields of machine learning share a common stumbling block: evaluation. Advances
in machine learning often evaporate under closer scrutiny or turn out to be less widely …

Exposed! A survey of attacks on private data

C Dwork, A Smith, T Steinke… - Annual Review of …, 2017 - annualreviews.org
Privacy-preserving statistical data analysis addresses the general question of protecting
privacy when publicly releasing information about a sensitive dataset. A privacy attack takes …

Privacy auditing with one (1) training run

T Steinke, M Nasr, M Jagielski - Advances in Neural …, 2024 - proceedings.neurips.cc
We propose a scheme for auditing differentially private machine learning systems with a
single training run. This exploits the parallelism of being able to add or remove multiple …
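The core idea behind this style of auditing can be illustrated with the standard conversion from membership-guess accuracy to an empirical ε lower bound: for any (ε, δ)-DP mechanism, a membership attack must satisfy TPR ≤ e^ε · FPR + δ, so an observed (TPR, FPR) pair certifies ε ≥ log((TPR − δ)/FPR). A minimal sketch (function name and canary counts are illustrative, and real audits add statistical confidence intervals around the observed rates):

```python
import math

def eps_lower_bound(tp, fp, n_pos, n_neg, delta=0.0):
    """Empirical epsilon lower bound from membership guesses.

    tp: correct 'member' guesses on inserted canaries
    fp: wrong 'member' guesses on held-out canaries
    """
    tpr = tp / n_pos
    fpr = fp / n_neg
    if fpr == 0:
        return float("inf")  # no false positives observed: bound is vacuous/infinite
    if tpr - delta <= 0:
        return 0.0
    # (eps, delta)-DP implies TPR <= e^eps * FPR + delta,
    # so the data certify eps >= log((TPR - delta) / FPR).
    return math.log((tpr - delta) / fpr)

# e.g. 90 of 100 canaries flagged as members, 10 of 100 non-members flagged
print(round(eps_lower_bound(90, 10, 100, 100), 3))  # log(9) ~ 2.197
```

The paper's contribution is obtaining many such guesses from a single training run by inserting multiple canaries at once, rather than retraining per guess.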

Certified robustness to adversarial examples with differential privacy

M Lecuyer, V Atlidakis, R Geambasu… - … IEEE symposium on …, 2019 - ieeexplore.ieee.org
Adversarial examples that fool machine learning models, particularly deep neural networks,
have been a topic of intense research interest, with attacks and defenses being developed …
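The mechanism behind this line of work (PixelDP) is prediction by voting over noise-perturbed copies of the input; calibrated Gaussian noise makes the vote behave like a DP mechanism over pixels, which yields a certified robustness radius. A minimal sketch of the voting step only, with an arbitrary classifier passed in (the certification calculation itself is omitted):

```python
import numpy as np

def smoothed_predict(predict_fn, x, sigma, n_samples, n_classes, rng):
    """Majority vote over Gaussian-perturbed copies of input x.

    predict_fn maps an input array to a class index in [0, n_classes).
    """
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        votes[predict_fn(noisy)] += 1
    return int(votes.argmax())

# Toy usage: a trivial linear "classifier" on a clearly positive input
rng = np.random.default_rng(0)
label = smoothed_predict(lambda z: int(z.sum() > 0.0),
                         np.full(4, 10.0), sigma=0.1,
                         n_samples=25, n_classes=2, rng=rng)
```

The certified radius then follows from how stable the top vote count is under the DP guarantee of the noise.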

Privacy risk in machine learning: Analyzing the connection to overfitting

S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …
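The connection to overfitting studied here can be seen in the simplest membership attack from this literature: guess "member" whenever a point's loss falls below a threshold, since overfit models assign systematically lower loss to training points. A toy sketch with synthetic loss distributions (the exponential scales are assumptions chosen only to mimic a train/test loss gap):

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Guess 'member' for each example whose loss is below the threshold."""
    return losses < threshold

# Synthetic losses: training points concentrate at low loss, test points higher
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

tau = 0.5
tpr = loss_threshold_attack(member_losses, tau).mean()
fpr = loss_threshold_attack(nonmember_losses, tau).mean()
```

The larger the generalization gap, the further the two loss distributions separate and the better this trivial attack performs.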

Deep learning with differential privacy

M Abadi, A Chu, I Goodfellow, HB McMahan… - Proceedings of the …, 2016 - dl.acm.org
Machine learning techniques based on neural networks are achieving remarkable results in
a wide variety of domains. Often, the training of models requires large, representative …
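The training procedure this paper introduces (DP-SGD) modifies each gradient step in two ways: clip every per-example gradient to a fixed norm, then add Gaussian noise scaled to that clipping norm before averaging. A minimal NumPy sketch of a single step (real implementations also use a privacy accountant to track cumulative ε, which is omitted here):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise of scale noise_multiplier * clip_norm,
    average over the batch, and take a gradient step."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    batch = clipped.shape[0]
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / batch
    return params - lr * noisy_mean
```

Per-example clipping bounds each individual's influence on the update (the sensitivity), which is what lets the Gaussian noise translate into a differential privacy guarantee.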

Adaptive machine unlearning

V Gupta, C Jung, S Neel, A Roth… - Advances in …, 2021 - proceedings.neurips.cc
Data deletion algorithms aim to remove the influence of deleted data points from trained
models at a cheaper computational cost than fully retraining those models. However, for …

White-box vs black-box: Bayes optimal strategies for membership inference

A Sablayrolles, M Douze, C Schmid… - International …, 2019 - proceedings.mlr.press
Membership inference determines, given a sample and trained parameters of a machine
learning model, whether the sample was part of the training set. In this paper, we derive the …

Concentrated differential privacy: Simplifications, extensions, and lower bounds

M Bun, T Steinke - Theory of cryptography conference, 2016 - Springer
“Concentrated differential privacy” was recently introduced by Dwork and Rothblum
as a relaxation of differential privacy, which permits sharper analyses of many privacy …
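Two standard facts from this framework show why zCDP permits sharper analyses: the Gaussian mechanism with sensitivity Δ and noise σ satisfies ρ-zCDP with ρ = Δ²/(2σ²), zCDP composes additively, and ρ-zCDP implies (ε, δ)-DP with ε = ρ + 2√(ρ ln(1/δ)). A small sketch of an accounting calculation using these conversions:

```python
import math

def gaussian_zcdp(sensitivity, sigma):
    """zCDP parameter rho for the Gaussian mechanism."""
    return sensitivity**2 / (2 * sigma**2)

def zcdp_to_dp(rho, delta):
    """Convert rho-zCDP to an (eps, delta)-DP guarantee."""
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

# zCDP composes additively: 100 Gaussian releases, sensitivity 1, sigma 10
rho_total = 100 * gaussian_zcdp(1.0, 10.0)   # 100 * 0.005 = 0.5
eps = zcdp_to_dp(rho_total, delta=1e-6)
```

The additive composition of ρ is what makes the resulting ε grow like √k over k releases, tighter than naive (ε, δ) composition.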

Multicalibration: Calibration for the (computationally-identifiable) masses

U Hébert-Johnson, M Kim… - International …, 2018 - proceedings.mlr.press
We develop and study multicalibration as a new measure of fairness in machine learning
that aims to mitigate inadvertent or malicious discrimination that is introduced at training time …
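The definition can be checked empirically: a predictor is approximately multicalibrated if, within every identifiable group and every prediction level, the average outcome matches the average prediction. A minimal auditing sketch over explicitly labeled groups (real multicalibration ranges over all computationally identifiable subpopulations, which this toy version does not capture):

```python
import numpy as np

def multicalibration_violation(preds, labels, groups, n_bins=10):
    """Worst calibration gap over (group, prediction-bucket) cells."""
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    worst = 0.0
    for g in np.unique(groups):
        for b in range(n_bins):
            cell = (groups == g) & (bins == b)
            if cell.sum() == 0:
                continue
            # In a multicalibrated predictor this gap is small in every cell
            gap = abs(preds[cell].mean() - labels[cell].mean())
            worst = max(worst, gap)
    return worst
```

A predictor can be well calibrated on average while hiding large gaps inside specific cells; the max over cells is exactly what multicalibration constrains.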