Subsampled Rényi differential privacy and analytical moments accountant

YX Wang, B Balle… - The 22nd International …, 2019 - proceedings.mlr.press
We study the problem of subsampling in differential privacy (DP), a question that is the
centerpiece behind many successful differentially private machine learning algorithms …
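The snippet refers to privacy amplification by subsampling. As a minimal illustration of the idea (using the classical (ε, δ)-DP amplification lemma, not the tighter Rényi DP bounds this paper derives), running an ε-DP mechanism on a random q-fraction of the data yields an amplified guarantee:

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Classical privacy amplification by subsampling: an eps-DP
    mechanism run on a q-fraction random subsample satisfies
    log(1 + q*(exp(eps) - 1))-DP on the full dataset."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# With a 1% sampling rate, a 1.0-DP mechanism amplifies to roughly q*eps.
print(amplified_epsilon(1.0, 0.01))
```

For small q the amplified ε is approximately q(e^ε − 1), which is why subsampled mechanisms (as in DP-SGD) can be composed many times within a modest overall budget.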

Private federated learning without a trusted server: Optimal algorithms for convex losses

A Lowy, M Razaviyayn - arXiv preprint arXiv:2106.09779, 2021 - arxiv.org
This paper studies federated learning (FL), especially cross-silo FL, with data from people
who do not trust the server or other silos. In this setting, each silo (e.g., a hospital) has data from …

Robust and private Bayesian inference

C Dimitrakakis, B Nelson, A Mitrokotsa… - … Learning Theory: 25th …, 2014 - Springer
We examine the robustness and privacy of Bayesian inference, under assumptions on the
prior, and with no modifications to the Bayesian framework. First, we generalise the concept …

Three variants of differential privacy: Lossless conversion and applications

S Asoodeh, J Liao, FP Calmon… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
We consider three different variants of differential privacy (DP), namely approximate DP,
Rényi DP (RDP), and hypothesis test DP. In the first part, we develop a machinery for …
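One concrete example of such a conversion is the standard RDP-to-approximate-DP bound (due to Mironov; this paper develops tighter machinery, which is not reproduced here): an (α, ε)-RDP mechanism satisfies (ε + log(1/δ)/(α − 1), δ)-DP for any δ ∈ (0, 1).

```python
import math

def rdp_to_dp(alpha: float, eps_rdp: float, delta: float) -> float:
    """Standard lossy conversion: (alpha, eps_rdp)-RDP implies
    (eps_rdp + log(1/delta)/(alpha - 1), delta)-approximate DP."""
    return eps_rdp + math.log(1.0 / delta) / (alpha - 1.0)

# e.g. an (alpha=32, eps=0.5)-RDP guarantee read out at delta = 1e-5:
print(rdp_to_dp(32.0, 0.5, 1e-5))
```

In practice one minimizes this bound over the available α values, which is exactly the step where lossless conversions such as those studied in this paper can improve on the generic formula.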

Bayesian differential privacy for machine learning

A Triastcyn, B Faltings - International Conference on …, 2020 - proceedings.mlr.press
Traditional differential privacy is independent of the data distribution. However, this is not
well-matched with the modern machine learning context, where models are trained on …

Tempered sigmoid activations for deep learning with differential privacy

N Papernot, A Thakurta, S Song, S Chien… - Proceedings of the …, 2021 - ojs.aaai.org
Because learning sometimes involves sensitive data, machine learning algorithms have
been extended to offer differential privacy for training data. In practice, this has been mostly …
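The tempered sigmoids of this paper form a bounded activation family parameterized by a scale s, inverse temperature T, and offset o; boundedness keeps activation magnitudes (and hence clipped gradients) under control during DP training. A minimal sketch, with tanh recovered as the member s = 2, T = 2, o = 1:

```python
import math

def tempered_sigmoid(x: float, s: float, T: float, o: float) -> float:
    """Tempered sigmoid: s / (1 + exp(-T*x)) - o.
    Output is bounded in (-o, s - o), which limits activation
    magnitudes -- the property exploited for DP-SGD training."""
    return s / (1.0 + math.exp(-T * x)) - o

# tanh is the special case s=2, T=2, o=1:
assert abs(tempered_sigmoid(0.7, 2.0, 2.0, 1.0) - math.tanh(0.7)) < 1e-12
```

The identity follows from 2/(1 + e^{-2x}) − 1 = (1 − e^{-2x})/(1 + e^{-2x}) = tanh(x).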

Renyi differential privacy of propose-test-release and applications to private and robust machine learning

JT Wang, S Mahloujifar, S Wang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Propose-Test-Release (PTR) is a differential privacy framework that works with
local sensitivity of functions, instead of their global sensitivity. This framework is typically …
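The PTR pattern can be sketched as follows (an illustrative skeleton only, under assumed inputs; the function names, the distance oracle, and the noise calibration below are simplifications, not this paper's construction): propose a bound on local sensitivity, privately test that the dataset is far from any dataset violating that bound, and only then release a noised answer.

```python
import math
import random

def laplace(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)

def ptr_release(data, query, dist_to_unstable, sensitivity_bound,
                eps: float, delta: float):
    """Propose-Test-Release skeleton:
    1. Propose: caller supplies sensitivity_bound on local sensitivity.
    2. Test: dist_to_unstable(data) is the distance (in records) to the
       nearest dataset where the bound fails; check it privately.
    3. Release: answer with Laplace noise scaled to the proposed bound,
       or refuse (None) if the noisy test fails.
    """
    threshold = math.log(1.0 / delta) / eps
    if dist_to_unstable(data) + laplace(1.0 / eps) <= threshold:
        return None  # too close to an unstable dataset: refuse to answer
    return query(data) + laplace(sensitivity_bound / eps)
```

Refusing to answer (the ⊥ output) is what costs the δ term; away from unstable datasets the mechanism behaves like the ordinary Laplace mechanism calibrated to the proposed, typically much smaller, local bound.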

User-level private learning via correlated sampling

B Ghazi, R Kumar, P Manurangsi - arXiv preprint arXiv:2110.11208, 2021 - arxiv.org
Most works in learning with differential privacy (DP) have focused on the setting where each
user has a single sample. In this work, we consider the setting where each user holds $ m …

Sample-efficient proper PAC learning with approximate differential privacy

B Ghazi, N Golowich, R Kumar… - Proceedings of the 53rd …, 2021 - dl.acm.org
In this paper we prove that the sample complexity of properly learning a class of Littlestone
dimension d with approximate differential privacy is Õ(d⁶), ignoring privacy and accuracy …

Randomized quantization is all you need for differential privacy in federated learning

Y Youn, Z Hu, J Ziani, J Abernethy - arXiv preprint arXiv:2306.11913, 2023 - arxiv.org
Federated learning (FL) is a common and practical framework for learning a machine learning model
in a decentralized fashion. A primary motivation behind this decentralized approach is data …