Fully-adaptive composition in differential privacy

J Whitehouse, A Ramdas… - … on Machine Learning, 2023 - proceedings.mlr.press
Composition is a key feature of differential privacy. Well-known advanced composition
theorems allow one to query a private database quadratically more times than basic privacy …
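
To make the gap concrete: under basic composition, k queries each satisfying ε-DP cost kε in total, while the classical advanced composition theorem of Dwork, Rothblum and Vadhan scales like √k, which is what permits roughly quadratically more queries for a fixed budget. A minimal numeric sketch with illustrative parameters (this paper's contribution is allowing the privacy parameters to be chosen adaptively; the bound below is the classical non-adaptive one):

```python
import math

def basic_composition_eps(eps, k):
    # k-fold basic composition: per-query budgets simply add up.
    return k * eps

def advanced_composition_eps(eps, k, delta_prime):
    # Advanced composition (Dwork-Rothblum-Vadhan): total epsilon grows
    # like sqrt(k), so a fixed budget admits ~quadratically more queries.
    return math.sqrt(2 * k * math.log(1 / delta_prime)) * eps + k * eps * (math.exp(eps) - 1)

eps, k, delta_prime = 0.1, 1000, 1e-6
print(basic_composition_eps(eps, k))                  # 100.0
print(advanced_composition_eps(eps, k, delta_prime))  # ~27.1
```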

Individual privacy accounting for differentially private stochastic gradient descent

D Yu, G Kamath, J Kulkarni, TY Liu, J Yin… - arXiv preprint arXiv …, 2022 - arxiv.org
Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for
recent advances in private deep learning. It provides a single privacy guarantee to all …
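
As background on what individual accounting tracks: for the Gaussian mechanism, an example whose gradient norm sits below the clipping bound contributes less sensitivity, so its Rényi-DP cost per step is smaller. A minimal sketch under that assumption (full-batch step, no subsampling amplification; the paper's accountant is more refined):

```python
import numpy as np

def per_example_rdp_step(grad_norms, clip_C, sigma, alpha):
    # Individual Renyi-DP cost of one Gaussian-mechanism step: an example
    # whose gradient norm is below the clipping bound C has smaller
    # sensitivity, so it accrues less privacy loss than the worst case.
    norms = np.minimum(grad_norms, clip_C)
    return alpha * norms**2 / (2 * (sigma * clip_C)**2)

norms = np.array([0.2, 1.0, 3.0])  # raw per-example gradient norms
print(per_example_rdp_step(norms, clip_C=1.0, sigma=1.0, alpha=2.0))
# -> [0.04 1.   1.  ]: the small-gradient example loses far less privacy
```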

Optimizing Noise for f-Differential Privacy via Anti-Concentration and Stochastic Dominance

J Awan, A Ramasethu - Journal of Machine Learning Research, 2024 - jmlr.org
In this paper, we establish anti-concentration inequalities for additive noise mechanisms
which achieve $f$-differential privacy ($f$-DP), a notion of privacy phrased in terms of a …
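
For orientation on the $f$-DP notion itself: the trade-off function of the Gaussian mechanism has a simple closed form, $T(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$ for $\mu$-GDP (Dong, Roth and Su). A small sketch of that standard formula, separate from this paper's anti-concentration results:

```python
import numpy as np
from scipy.stats import norm

def gaussian_tradeoff(alpha, mu):
    # Trade-off function of mu-Gaussian DP: the smallest type-II error any
    # attacker can achieve at type-I error alpha when distinguishing
    # mechanism outputs on neighboring datasets.
    return norm.cdf(norm.ppf(1 - alpha) - mu)

alphas = np.linspace(0.01, 0.99, 5)
print(gaussian_tradeoff(alphas, mu=1.0))
```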

Gradients look alike: Sensitivity is often overestimated in DP-SGD

A Thudi, H Jia, C Meehan, I Shumailov… - 33rd USENIX Security …, 2024 - usenix.org
Differentially private stochastic gradient descent (DP-SGD) is the canonical approach to
private deep learning. While the current privacy analysis of DP-SGD is known to be tight in …
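
The observation in the title can be reproduced on toy data: per-example gradients of a simple model often have norms far below the clipping bound that the worst-case DP-SGD analysis charges for. A hedged sketch with synthetic data and a linear logistic model (not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(scale=0.3, size=(1000, 20))  # toy features
y = rng.integers(0, 2, size=1000)           # toy binary labels
w = np.zeros(20)                            # linear model at initialization

# Per-example gradients of the logistic loss for a linear model.
p = 1 / (1 + np.exp(-X @ w))
per_example_grads = (p - y)[:, None] * X
norms = np.linalg.norm(per_example_grads, axis=1)

C = 1.0  # illustrative DP-SGD clipping bound
print(f"fraction of examples below the clip bound: {(norms < C).mean():.2f}")
```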

Privacy Amplification for the Gaussian Mechanism via Bounded Support

S Hu, S Mahloujifar, V Smith, K Chaudhuri… - arXiv preprint arXiv …, 2024 - arxiv.org
Data-dependent privacy accounting frameworks such as per-instance differential privacy
(pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for …
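
One simple way to give Gaussian noise bounded support is rejection sampling into a fixed interval; the sketch below shows that flavor of mechanism with illustrative parameters, and is not the paper's exact construction or its pDP/FIL analysis:

```python
import numpy as np

def bounded_gaussian(value, sigma, bound, rng):
    # Gaussian noise truncated by rejection sampling to [-bound, bound],
    # so the mechanism's output has bounded support around the true value.
    while True:
        noise = rng.normal(0.0, sigma)
        if abs(noise) <= bound:
            return value + noise

rng = np.random.default_rng(0)
print(bounded_gaussian(0.5, sigma=1.0, bound=2.0, rng=rng))
```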

S-BDT: Distributed Differentially Private Boosted Decision Trees

T Peinemann, M Kirschte, J Stock, C Cotrini… - Proceedings of the …, 2024 - dl.acm.org
We introduce S-BDT: a novel (ε, δ)-differentially private distributed gradient boosted decision
tree (GBDT) learner that improves the protection of single training data points (privacy) while …
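
For context on where noise enters a DP GBDT: a common pattern is to clip per-example gradients and privatize each leaf's gradient sum before computing the leaf value. A generic sketch of that pattern (S-BDT's actual distributed mechanism is more involved):

```python
import numpy as np

def private_leaf_value(gradients, lam, grad_bound, sigma, rng):
    # GBDT leaf value = sum(gradients) / (count + lambda), privatized by
    # clipping each gradient to [-grad_bound, grad_bound] (bounding the
    # sensitivity of the sum) and adding calibrated Gaussian noise.
    g = np.clip(gradients, -grad_bound, grad_bound)
    noisy_sum = g.sum() + rng.normal(0.0, sigma * grad_bound)
    return noisy_sum / (len(g) + lam)

rng = np.random.default_rng(0)
print(private_leaf_value(np.array([0.3, -0.8, 1.4]), lam=1.0,
                         grad_bound=1.0, sigma=2.0, rng=rng))
```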

SoK: Memorisation in machine learning

D Usynin, M Knolle, G Kaissis - arXiv preprint arXiv:2311.03075, 2023 - arxiv.org
Quantifying the impact of individual data samples on machine learning models is an open
research problem. This is particularly relevant when complex and high-dimensional …
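
One widely used formalisation of a single sample's impact, which surveys of memorisation commonly cover, is Feldman's leave-one-out memorization score; it is included here as standard background rather than this SoK's own definition:

$$\mathrm{mem}(A, S, i) \;=\; \Pr_{h \sim A(S)}\bigl[h(x_i) = y_i\bigr] \;-\; \Pr_{h \sim A(S \setminus \{i\})}\bigl[h(x_i) = y_i\bigr]$$

A score near 1 means the model predicts the label essentially only when the example itself is in the training set.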

Understanding Practical Membership Privacy of Deep Learning

M Tobaben, G Pradhan, Y He, J Jälkö… - arXiv preprint arXiv …, 2024 - arxiv.org
We apply a state-of-the-art membership inference attack (MIA) to systematically test the
practical privacy vulnerability of fine-tuning large image classification models. We focus on …
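
The hypothesis-testing template behind such attacks can be illustrated with the simplest baseline, a loss threshold (Yeom et al.); the state-of-the-art attack the paper applies is substantially stronger but shares the same member/non-member decision structure. Values below are illustrative:

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    # Simplest membership inference baseline: predict "member" when a
    # sample's loss falls below a threshold, exploiting the fact that
    # training points tend to have lower loss than held-out points.
    return losses < threshold

losses = np.array([0.02, 0.10, 0.05, 0.90, 1.70, 0.40])  # 3 members, 3 non-members
print(loss_threshold_mia(losses, threshold=0.20))
# -> [ True  True  True False False False]
```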

Personalized DP-SGD using Sampling Mechanisms

G Heo, J Seo, SE Whang - arXiv preprint arXiv:2305.15165, 2023 - arxiv.org
Personalized privacy is becoming critical in deep learning for Trustworthy AI. While
Differentially Private Stochastic Gradient Descent (DP-SGD) is widely used in deep learning …
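
The core idea of a sampling-based mechanism for personalized DP can be sketched as Poisson subsampling with per-example inclusion probabilities; how the rates are calibrated from individual (ε, δ) targets is the paper's contribution and is not reproduced here:

```python
import numpy as np

def personalized_poisson_batch(indices, sample_rates, rng):
    # Poisson subsampling with a per-example inclusion probability:
    # examples holding larger privacy budgets get higher sampling rates
    # and therefore influence training more.
    mask = rng.random(len(indices)) < sample_rates
    return indices[mask]

rng = np.random.default_rng(0)
idx = np.arange(6)
rates = np.array([0.1, 0.1, 0.5, 0.5, 0.9, 0.9])  # looser budget -> higher rate
print(personalized_poisson_batch(idx, rates, rng))
```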

Free Record-Level Privacy Risk Evaluation Through Artifact-Based Methods

J Pollock, I Shilov, E Dodd, YA de Montjoye - arXiv preprint arXiv …, 2024 - arxiv.org
Membership inference attacks (MIAs) are widely used to empirically assess the privacy risks
of samples used to train a target machine learning model. State-of-the-art methods however …
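
A sketch of what "artifact-based" can mean in practice: scoring a record from per-sample losses at checkpoints the training run already saved, with no shadow models to train. The scoring rule below is hypothetical, purely to illustrate the interface:

```python
import numpy as np

def loss_trace_score(loss_per_checkpoint):
    # Scores a training record from artifacts the run already produced
    # (its loss at each saved checkpoint). Hypothetical rule: a steep loss
    # drop hints that the record is memorized, hence more exposed to MIAs.
    return loss_per_checkpoint[0] - loss_per_checkpoint[-1]

trace = np.array([2.30, 1.10, 0.30, 0.05])  # losses at four checkpoints
print(loss_trace_score(trace))              # large drop -> higher estimated risk
```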