Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …

The distributed discrete gaussian mechanism for federated learning with secure aggregation

P Kairouz, Z Liu, T Steinke - International Conference on …, 2021 - proceedings.mlr.press
We consider training models on private data that are distributed across user devices. To
ensure privacy, we add on-device noise and use secure aggregation so that only the noisy …
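The pattern this abstract describes can be sketched minimally: each client quantizes its update and adds noise on-device, and the server learns only the sum of the noisy messages. This is an illustrative sketch only — the rounded continuous Gaussian below is a stand-in for the paper's exact discrete Gaussian sampler, the plain sum stands in for a real secure-aggregation protocol, and all function names are made up for the example.

```python
import numpy as np

def rounded_gaussian(sigma, shape, rng):
    # Stand-in for a true discrete Gaussian sampler: sample a continuous
    # Gaussian and round to the nearest integer (illustration only).
    return np.rint(rng.normal(0.0, sigma, shape)).astype(int)

def client_message(grad, scale, sigma, rng):
    # Quantize the local update to integers, then add noise on-device so
    # no un-noised individual contribution ever leaves the client.
    q = np.rint(grad * scale).astype(int)
    return q + rounded_gaussian(sigma, q.shape, rng)

def secure_aggregate(messages):
    # A real SecAgg protocol reveals only this sum to the server;
    # here it is modelled as a plain elementwise sum.
    return np.sum(messages, axis=0)

rng = np.random.default_rng(0)
client_grads = [rng.normal(size=4) for _ in range(10)]
scale, sigma = 100.0, 3.0
messages = [client_message(g, scale, sigma, rng) for g in client_grads]
# Server-side estimate of the mean update, from the aggregate alone.
noisy_mean = secure_aggregate(messages) / (scale * len(messages))
```

Because the individual noise terms add up across clients, the aggregate noise can match a centrally applied Gaussian mechanism while the server never observes any single client's raw update.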

Shuffled model of differential privacy in federated learning

A Girgis, D Data, S Diggavi… - International …, 2021 - proceedings.mlr.press
We consider a distributed empirical risk minimization (ERM) optimization problem with
communication efficiency and privacy requirements, motivated by the federated learning …

The fundamental price of secure aggregation in differentially private federated learning

WN Chen, CAC Choo, P Kairouz… - … on Machine Learning, 2022 - proceedings.mlr.press
We consider the problem of training a $d$-dimensional model with distributed differential
privacy (DP) where secure aggregation (SecAgg) is used to ensure that the server only sees …

Differential privacy in the shuffle model: A survey of separations

A Cheu - arXiv preprint arXiv:2107.11839, 2021 - arxiv.org
Differential privacy is often studied in one of two models. In the central model, a single
analyzer has the responsibility of performing a privacy-preserving computation on data. But …

Privacy amplification via compression: Achieving the optimal privacy-accuracy-communication trade-off in distributed mean estimation

WN Chen, D Song, A Ozgur… - Advances in Neural …, 2024 - proceedings.neurips.cc
Privacy and communication constraints are two major bottlenecks in federated learning (FL)
and analytics (FA). We study the optimal accuracy of mean and frequency estimation …

User-level differentially private learning via correlated sampling

B Ghazi, R Kumar… - Advances in Neural …, 2021 - proceedings.neurips.cc
Most works in learning with differential privacy (DP) have focused on the setting where each
user has a single sample. In this work, we consider the setting where each user holds $m$ …

Shuffled model of federated learning: Privacy, accuracy and communication trade-offs

AM Girgis, D Data, S Diggavi, P Kairouz… - IEEE journal on …, 2021 - ieeexplore.ieee.org
We consider a distributed empirical risk minimization (ERM) optimization problem with
communication efficiency and privacy requirements, motivated by the federated learning …

Distributed, private, sparse histograms in the two-server model

J Bell, A Gascon, B Ghazi, R Kumar… - Proceedings of the …, 2022 - dl.acm.org
We consider the computation of sparse, (ε, δ)-differentially private (DP) histograms in the
two-server model of secure multi-party computation (MPC), which has recently gained …

Differentially private aggregation in the shuffle model: Almost central accuracy in almost a single message

B Ghazi, R Kumar, P Manurangsi… - International …, 2021 - proceedings.mlr.press
The shuffle model of differential privacy has attracted attention in the literature as
a middle ground between the well-studied central and local models. In this work, we study …
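The shuffle-model pipeline recurring across these entries has three stages: each client applies a local randomizer, a trusted shuffler permutes the messages to break the link between message and sender, and an analyzer aggregates. A minimal sketch under illustrative assumptions — binary randomized response as the local randomizer and a simple debiased count as the analyzer; none of this is any specific paper's protocol.

```python
import random

def local_randomizer(bit, p, rng):
    # Binary randomized response: report truthfully with probability p,
    # flip the bit otherwise (a simple local DP mechanism).
    return bit if rng.random() < p else 1 - bit

def shuffler(messages, rng):
    # The shuffler only permutes: the analyzer sees the multiset of
    # reports but not which client sent which message.
    out = list(messages)
    rng.shuffle(out)
    return out

def analyzer(messages, p, n):
    # Debias the randomized-response sum to estimate the true count
    # of ones: E[sum] = p*c + (1-p)*(n-c), solved for c.
    return (sum(messages) - n * (1 - p)) / (2 * p - 1)

rng = random.Random(0)
data = [1] * 60 + [0] * 40   # true count of ones: 60
p = 0.75
reports = shuffler([local_randomizer(b, p, rng) for b in data], rng)
estimate = analyzer(reports, p, len(data))
```

The separations surveyed above concern how much accuracy this anonymity buys: shuffling amplifies the privacy of the local reports, placing the model strictly between the local model (no shuffler) and the central model (a trusted analyzer sees raw data).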