Privacy auditing with one (1) training run

T Steinke, M Nasr, M Jagielski - Advances in Neural …, 2024 - proceedings.neurips.cc
We propose a scheme for auditing differentially private machine learning systems with a
single training run. This exploits the parallelism of being able to add or remove multiple …
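
The paper's estimator comes with confidence intervals over the number of correct guesses; as a loose illustration of the one-run idea only, here is a minimal sketch: inject many canaries, each included in training independently with probability 1/2, train once, and invert the attacker's per-canary guessing accuracy into a heuristic point estimate of epsilon. The scoring rule and threshold below are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def one_run_audit_epsilon(canary_scores, included_mask):
    """Heuristic epsilon estimate from a single training run.

    canary_scores: attacker's score per canary (higher = "was trained on").
    included_mask: ground truth; each canary included independently w.p. 1/2.
    """
    guesses = canary_scores > np.median(canary_scores)
    acc = np.clip(np.mean(guesses == included_mask), 1e-6, 1 - 1e-6)
    # For an eps-DP trainer, per-canary guessing accuracy is at most
    # e^eps / (1 + e^eps); inverting gives a point estimate (no confidence
    # interval, unlike the paper's analysis).
    return np.log(acc / (1 - acc))

# Synthetic example: included canaries score slightly higher on average.
rng = np.random.default_rng(0)
mask = rng.random(1000) < 0.5
scores = rng.normal(loc=mask.astype(float), scale=2.0)
print(one_run_audit_epsilon(scores, mask))
```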

DP-Forward: Fine-tuning and inference on language models with differential privacy in forward pass

M Du, X Yue, SSM Chow, T Wang, C Huang… - Proceedings of the 2023 …, 2023 - dl.acm.org
Differentially private stochastic gradient descent (DP-SGD) adds noise to gradients in
backpropagation, safeguarding training data from privacy leakage, particularly membership …
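
A hedged sketch of the forward-pass alternative (DP-Forward itself applies an analytic matrix Gaussian mechanism to embedding matrices; this simplification just clips per-token embeddings and adds spherical Gaussian noise before the rest of the network runs):

```python
import torch

def noisy_forward_embeddings(emb, clip_norm=1.0, noise_multiplier=1.0):
    # emb: (batch, seq_len, dim) token embeddings.
    # Clip each token embedding to bound its norm (sensitivity), then add
    # Gaussian noise, so downstream activations never see raw embeddings.
    norms = emb.norm(dim=-1, keepdim=True)
    clipped = emb * torch.clamp(clip_norm / (norms + 1e-12), max=1.0)
    return clipped + noise_multiplier * clip_norm * torch.randn_like(emb)
```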

One-shot empirical privacy estimation for federated learning

G Andrew, P Kairouz, S Oh, A Oprea… - arXiv preprint arXiv …, 2023 - arxiv.org
Privacy estimation techniques for differentially private (DP) algorithms are useful for
comparing against analytical bounds or for empirically measuring privacy loss in settings …
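
As I read the one-shot approach, random canary client updates are injected during training, and the final model's cosine similarity with those canaries is compared against the null distribution for uninserted random directions. A sketch of the test statistic only, without the paper's mapping from statistics to an epsilon estimate:

```python
import numpy as np

def canary_cosine_stats(final_delta, canaries):
    """final_delta: aggregate model update, shape (d,).
    canaries: (k, d) random unit vectors injected as fake client updates.

    For a random unit vector that was *not* inserted, the cosine with
    final_delta is approximately N(0, 1/d); canary cosines drifting above
    this null indicate memorization, i.e. privacy loss.
    """
    unit = final_delta / np.linalg.norm(final_delta)
    cos = canaries @ unit
    return cos.mean(), cos.std(), 1.0 / np.sqrt(final_delta.shape[0])
```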

On differentially private federated linear contextual bandits

X Zhou, SR Chowdhury - arXiv preprint arXiv:2302.13945, 2023 - arxiv.org
We consider the cross-silo federated linear contextual bandit (LCB) problem under differential
privacy, where multiple silos (agents) interact with the local users and communicate via a …
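
In linear contextual bandits the sufficient statistics are a Gram matrix V = Σ xxᵀ and a reward vector b = Σ rx. A minimal sketch of how a silo might privatize them before communicating; the paper's actual protocol uses tree-based aggregation across communication rounds, which this ignores:

```python
import numpy as np

def privatize_lcb_statistics(V, b, sigma, rng=None):
    # Add symmetric Gaussian noise to the Gram matrix and Gaussian noise to
    # the reward vector before a silo shares them with the server.
    rng = rng or np.random.default_rng()
    d = V.shape[0]
    N = rng.normal(scale=sigma, size=(d, d))
    return V + (N + N.T) / np.sqrt(2), b + rng.normal(scale=sigma, size=d)
```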

Practical differentially private hyperparameter tuning with subsampling

A Koskela, TD Kulkarni - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Tuning the hyperparameters of differentially private (DP) machine learning (ML) algorithms
often requires the use of sensitive data, and this may leak private information via hyperparameter …
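
A sketch of the subsampling idea, assuming each candidate run is itself DP and that `train_and_score` is a user-supplied (hypothetical) routine: tune on a random q-fraction of the data so that amplification by subsampling shrinks the tuning phase's privacy cost, then spend the remaining budget on one full-data run with the winning hyperparameters.

```python
import numpy as np

def tune_on_subsample(data, candidates, train_and_score, q=0.1, rng=None):
    # train_and_score(subset, hp) -> DP validation score (hypothetical helper).
    rng = rng or np.random.default_rng()
    subset = data[rng.random(len(data)) < q]   # Poisson q-subsample
    scores = [train_and_score(subset, hp) for hp in candidates]
    return candidates[int(np.argmax(scores))]  # then retrain on full data
```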

Efficient and near-optimal noise generation for streaming differential privacy

HB McMahan, K Pillutla, T Steinke… - arXiv preprint arXiv …, 2024 - arxiv.org
In the task of differentially private (DP) continual counting, we receive a stream of increments
and our goal is to output an approximate running total of these increments, without revealing …
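
For context, the classical binary-tree mechanism is the standard baseline this line of work improves on: add noise to sums over dyadic intervals, so each prefix sum touches only O(log T) noisy values. A minimal sketch of that baseline (the paper's contribution is a more efficient, near-optimal noise factorization, not this):

```python
import math
import numpy as np

def private_prefix_sums(stream, sigma, rng=None):
    """Binary-tree mechanism for DP continual counting.

    Each increment lands in O(log T) dyadic blocks, so sigma should be
    scaled by sqrt(levels) to meet a Gaussian-DP target (omitted here).
    """
    rng = rng or np.random.default_rng()
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T)) + 1)
    # noisy[l][j] = true sum of stream[j*2^l : (j+1)*2^l] + Gaussian noise
    noisy = []
    for l in range(levels):
        size = 2 ** l
        blocks = [sum(stream[j * size:(j + 1) * size])
                  for j in range((T + size - 1) // size)]
        noisy.append([s + rng.normal(scale=sigma) for s in blocks])
    out = []
    for t in range(1, T + 1):
        total, pos = 0.0, 0
        for l in reversed(range(levels)):  # cover [0, t) with dyadic blocks
            size = 2 ** l
            while pos + size <= t:
                total += noisy[l][pos // size]
                pos += size
        out.append(total)
    return out

print(private_prefix_sums([1, 0, 1, 1, 0, 1], sigma=0.5))
```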

Privacy amplification for matrix mechanisms

CA Choquette-Choo, A Ganesh, T Steinke… - arXiv preprint arXiv …, 2023 - arxiv.org
Privacy amplification exploits randomness in data selection to provide tighter differential
privacy (DP) guarantees. This analysis is key to DP-SGD's success in machine learning, but …
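
The baseline fact being generalized: for Poisson subsampling at rate q, an eps-DP mechanism becomes log(1 + q(e^eps − 1))-DP. A one-function sketch of that classical bound (the paper's contribution is extending amplification to the correlated noise of matrix mechanisms, which this bound does not cover):

```python
import math

def amplified_epsilon(eps, q):
    # Classical amplification-by-subsampling bound, Poisson sampling rate q.
    return math.log1p(q * math.expm1(eps))

print(amplified_epsilon(1.0, 0.01))  # ~0.017, far tighter than eps = 1.0
```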

Subsampling suffices for adaptive data analysis

G Blanc - Proceedings of the 55th Annual ACM Symposium on …, 2023 - dl.acm.org
Ensuring that analyses performed on a dataset are representative of the entire population is
one of the central problems in statistics. Most classical techniques assume that the dataset is …
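
The mechanism suggested by the title is strikingly simple: answer each (possibly adaptively chosen) statistical query using only a small, fresh random subsample. A hedged sketch, omitting the paper's precise accuracy/adaptivity trade-off:

```python
import numpy as np

def answer_on_subsample(data, query, k, rng=None):
    # data: numpy array of records. Evaluate the query on k randomly chosen
    # points rather than the full dataset; the subsampling itself limits how
    # much an adaptive analyst can overfit to the data.
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(data), size=k, replace=False)
    return float(np.mean([query(x) for x in data[idx]]))
```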

Privacy-Preserving Instructions for Aligning Large Language Models

D Yu, P Kairouz, S Oh, Z Xu - arXiv preprint arXiv:2402.13659, 2024 - arxiv.org
Service providers of large language model (LLM) applications collect user instructions in the
wild and use them in further aligning LLMs with users' intentions. These instructions, which …

Grafting Laplace and Gaussian distributions: A new noise mechanism for differential privacy

G Muthukrishnan, S Kalyani - IEEE Transactions on Information …, 2023 - ieeexplore.ieee.org
The framework of differential privacy protects an individual's privacy while publishing query
responses on aggregated data. In this work, a new noise addition mechanism for …
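
One plausible reading of the grafting construction (hedged; the paper's exact density and parameterization may differ): keep a Gaussian body near zero for concentration, and splice on Laplace-style exponential tails so the privacy-loss ratio stays bounded, as pure eps-DP requires. A sampler for such a spliced density:

```python
import numpy as np
from math import erf, exp, sqrt, pi

def grafted_noise(sigma, lam, a, size=1, rng=None):
    """Sample from a density with a Gaussian body on [-a, a] and exponential
    (Laplace-style) tails of rate lam beyond, continuous at |x| = a.

    An illustrative assumption about the graft idea, not the paper's exact
    mechanism: Laplace tails bound the privacy-loss ratio (pure eps-DP),
    while the Gaussian body concentrates the noise near zero.
    """
    rng = rng or np.random.default_rng()
    # Unnormalized pieces: g(x) = exp(-x^2 / (2 sigma^2)) on [-a, a];
    # t(x) = g(a) * exp(-lam (|x| - a)) outside, so the density is continuous.
    core_mass = sqrt(2 * pi) * sigma * erf(a / (sigma * sqrt(2)))
    tail_mass = 2 * exp(-a * a / (2 * sigma * sigma)) / lam
    p_core = core_mass / (core_mass + tail_mass)
    out = np.empty(size)
    for i in range(size):
        if rng.random() < p_core:
            # Truncated Gaussian on [-a, a] by simple rejection sampling.
            while True:
                x = rng.normal(scale=sigma)
                if abs(x) <= a:
                    break
        else:
            # Exponential tail beyond |a|, with a random sign.
            x = (a + rng.exponential(1.0 / lam)) * rng.choice([-1.0, 1.0])
        out[i] = x
    return out

print(grafted_noise(sigma=1.0, lam=1.0, a=2.0, size=5))
```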