Scalable DP-SGD: Shuffling vs. Poisson subsampling

L Chua, B Ghazi, P Kamath, R Kumar… - arXiv preprint arXiv …, 2024 - arxiv.org
We provide new lower bounds on the privacy guarantee of the multi-epoch Adaptive Batch
Linear Queries (ABLQ) mechanism with shuffled batch sampling, demonstrating substantial …
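The two batch-sampling schemes contrasted in this paper's title can be illustrated generically as follows; this is a minimal sketch of shuffling versus Poisson subsampling, not the paper's ABLQ mechanism or its privacy analysis:

```python
import random

def poisson_batches(n, p, steps, seed=0):
    # Poisson subsampling: each example joins each batch independently
    # with probability p, so batch sizes vary from step to step.
    rng = random.Random(seed)
    return [[i for i in range(n) if rng.random() < p] for _ in range(steps)]

def shuffled_batches(n, batch_size, seed=0):
    # Shuffling: permute the dataset once, then partition it into
    # fixed-size batches covering every example exactly once per epoch.
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n, batch_size)]
```

Shuffling is what practical DP-SGD implementations typically do, while Poisson subsampling is what the standard privacy accounting assumes; the gap between the two is the subject of the lower bounds above.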

Towards efficient and scalable training of differentially private deep learning

SR Beltran, M Tobaben, J Jälkö, N Loppi… - arXiv preprint arXiv …, 2024 - arxiv.org
Differentially private stochastic gradient descent (DP-SGD) is the standard algorithm for
training machine learning models under differential privacy (DP). The most common DP …
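For reference, one update step of the standard DP-SGD algorithm mentioned above can be sketched as follows (a generic illustration with NumPy, assuming per-example gradients are already computed; the function name and parameters are ours, not from the paper):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    # DP-SGD: clip each per-example gradient to an L2 norm of at most
    # clip_norm, sum them, add Gaussian noise calibrated to clip_norm,
    # then take an averaged gradient step.
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch
    return params - lr * noisy_mean
```

With `noise_multiplier=0` this reduces to plain clipped SGD; the privacy guarantee comes from the noise scale relative to the per-example clipping bound.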

Near exact privacy amplification for matrix mechanisms

CA Choquette-Choo, A Ganesh, S Haque… - arXiv preprint arXiv …, 2024 - arxiv.org
We study the problem of computing the privacy parameters for DP machine learning when
using privacy amplification via random batching and noise correlated across rounds via a …

Optimal Rates for DP-SCO with a Single Epoch and Large Batches

CA Choquette-Choo, A Ganesh, A Thakurta - arXiv preprint arXiv …, 2024 - arxiv.org
The most common algorithms for differentially private (DP) machine learning (ML) are all
based on stochastic gradient descent, for example, DP-SGD. These algorithms achieve DP …

Approximating Two-Layer ReLU Networks for Hidden State Analysis in Differential Privacy

A Koskela - arXiv preprint arXiv:2407.04884, 2024 - arxiv.org
The hidden state threat model of differential privacy (DP) assumes that the adversary has
access only to the final trained machine learning (ML) model, without seeing intermediate …