The distributed discrete Gaussian mechanism for federated learning with secure aggregation

P Kairouz, Z Liu, T Steinke - International Conference on …, 2021 - proceedings.mlr.press
We consider training models on private data that are distributed across user devices. To
ensure privacy, we add on-device noise and use secure aggregation so that only the noisy …
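
A minimal sketch of the pattern this entry describes, assuming a scalar quantization scale, a truncated discrete-Gaussian sampler, and a modular sum standing in for secure aggregation; every parameter below is an illustrative choice, not the paper's calibrated mechanism.

```python
import numpy as np

def sample_discrete_gaussian(sigma, size, tail=20):
    """Approximate discrete Gaussian via a truncated support (illustrative only)."""
    limit = int(np.ceil(tail * sigma))
    support = np.arange(-limit, limit + 1)
    probs = np.exp(-support.astype(float) ** 2 / (2 * sigma ** 2))
    probs /= probs.sum()
    return np.random.choice(support, size=size, p=probs)

def client_report(update, scale=1000, sigma=50.0, modulus=2**20):
    """Quantize the local update, add discrete noise on-device, reduce mod the SecAgg modulus."""
    quantized = np.round(update * scale).astype(np.int64)
    noisy = quantized + sample_discrete_gaussian(sigma, size=quantized.shape)
    return np.mod(noisy, modulus)

def server_aggregate(reports, n_clients, scale=1000, modulus=2**20):
    """Modular sum (what SecAgg would reveal), recentered and dequantized to an average update."""
    total = np.mod(np.sum(reports, axis=0), modulus)
    total = np.where(total >= modulus // 2, total - modulus, total)  # map back to signed range
    return total / (scale * n_clients)

# toy run: 5 clients, 4-dimensional model updates
updates = [np.random.randn(4) * 0.1 for _ in range(5)]
reports = [client_report(u) for u in updates]
print("true mean:   ", np.mean(updates, axis=0))
print("private mean:", server_aggregate(reports, n_clients=5))
```

The point of the sketch is only that the server works with the modular sum of already-noised integer reports, never with any individual client's update.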

The fundamental price of secure aggregation in differentially private federated learning

WN Chen, CAC Choo, P Kairouz… - … on Machine Learning, 2022 - proceedings.mlr.press
We consider the problem of training a $d$-dimensional model with distributed differential privacy (DP) where secure aggregation (SecAgg) is used to ensure that the server only sees …

Topics and techniques in distribution testing: A biased but representative sample

CL Canonne - Foundations and Trends® in Communications …, 2022 - nowpublishers.com
We focus on some specific problems in distribution testing, taking goodness-of-fit as a
running example. In particular, we do not aim to provide a comprehensive summary of all the …
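
To make the running example concrete, the sketch below is a plain chi-square goodness-of-fit (identity) test against a known distribution q; the statistic and threshold are textbook choices assumed for illustration, not drawn from the survey itself.

```python
import numpy as np
from scipy.stats import chi2

def identity_test(samples, q, alpha=0.05):
    """Chi-square goodness-of-fit: H0 says the samples were drawn from the known distribution q."""
    k = len(q)
    counts = np.bincount(samples, minlength=k).astype(float)
    n = counts.sum()
    expected = n * np.asarray(q, dtype=float)
    stat = np.sum((counts - expected) ** 2 / expected)
    return stat <= chi2.ppf(1 - alpha, df=k - 1)  # True: consistent with q

# toy run over a domain of size 4
q = [0.25, 0.25, 0.25, 0.25]
rng = np.random.default_rng(0)
print(identity_test(rng.integers(0, 4, size=2000), q))                      # samples from q
print(identity_test(rng.choice(4, size=2000, p=[0.4, 0.3, 0.2, 0.1]), q))   # samples far from q
```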

Differential privacy in the shuffle model: A survey of separations

A Cheu - arXiv preprint arXiv:2107.11839, 2021 - arxiv.org
Differential privacy is often studied in one of two models. In the central model, a single
analyzer has the responsibility of performing a privacy-preserving computation on data. But …
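
A minimal sketch of the shuffle model the survey separates from the central and local ones, assuming binary randomized response as the local randomizer: users randomize locally, a shuffler discards who sent what, and the analyzer debiases the shuffled reports. The flip probability and debiasing step are the standard randomized-response formulas, used only to illustrate the pipeline.

```python
import numpy as np

def local_randomizer(bit, eps, rng):
    """Binary randomized response: keep the true bit with probability e^eps / (e^eps + 1)."""
    keep = rng.random() < np.exp(eps) / (np.exp(eps) + 1)
    return bit if keep else 1 - bit

def shuffle(reports, rng):
    """The shuffler's only job: destroy the link between users and their messages."""
    reports = list(reports)
    rng.shuffle(reports)
    return reports

def analyzer_estimate(shuffled, eps):
    """Debias the randomized-response counts to estimate the true fraction of 1s."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    return (np.mean(shuffled) - (1 - p)) / (2 * p - 1)

rng = np.random.default_rng(1)
true_bits = rng.integers(0, 2, size=10_000)                      # users' private bits
reports = [local_randomizer(b, eps=1.0, rng=rng) for b in true_bits]
print(true_bits.mean(), analyzer_estimate(shuffle(reports, rng), eps=1.0))
```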

Privacy amplification by decentralization

E Cyffers, A Bellet - International Conference on Artificial …, 2022 - proceedings.mlr.press
Analyzing data owned by several parties while achieving a good trade-off between utility
and privacy is a key challenge in federated learning and analytics. In this work, we introduce …

Differentially private aggregation in the shuffle model: Almost central accuracy in almost a single message

B Ghazi, R Kumar, P Manurangsi… - International …, 2021 - proceedings.mlr.press
The shuffle model of differential privacy has attracted attention in the literature as a middle ground between the well-studied central and local models. In this work, we study …
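
One standard building block behind such single-message shuffle protocols is distributed noise whose sum follows a well-understood central distribution. The sketch below assumes the Pólya (negative-binomial) decomposition of discrete Laplace noise, so each user adds only a tiny noise share and the full noise appears in the aggregate; this is a known trick, not necessarily the exact mechanism of this paper.

```python
import numpy as np

def polya_sample(r, p, rng):
    """Pólya (negative binomial with real-valued r) via its Gamma-Poisson mixture."""
    lam = rng.gamma(shape=r, scale=p / (1 - p))
    return rng.poisson(lam)

def user_noise(n_users, alpha, rng):
    """One user's noise share; summed over n_users it is discrete Laplace with parameter alpha."""
    return polya_sample(1.0 / n_users, alpha, rng) - polya_sample(1.0 / n_users, alpha, rng)

rng = np.random.default_rng(2)
n_users, alpha = 100, 0.8                              # alpha plays the role of exp(-eps)
values = rng.integers(0, 10, size=n_users)             # users' private integer inputs
messages = [int(v) + user_noise(n_users, alpha, rng) for v in values]
print("true sum:", values.sum(), "noisy aggregate:", sum(messages))
```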

Locally private k-means in one round

A Chang, B Ghazi, R Kumar… - … on machine learning, 2021 - proceedings.mlr.press
We provide an approximation algorithm for k-means clustering in the one-round (aka non-interactive) local model of differential privacy (DP). Our algorithm …

The Flajolet-Martin sketch itself preserves differential privacy: Private counting with minimal space

A Smith, S Song… - Advances in Neural …, 2020 - proceedings.neurips.cc
We revisit the problem of counting the number of distinct elements $\mathrm{dist}$ in a data stream $D$, over a domain $[u]$. We propose an $(\epsilon,\delta)$-differentially private algorithm …
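
For context, here is the classic (non-private) Flajolet-Martin sketch the title refers to: each element is hashed and only the maximum trailing-zero count is retained, so the stored state depends on the data only through coarse hash statistics. The hash, correction constant, and single-sketch estimator are the textbook versions, not the paper's private algorithm.

```python
import hashlib

def trailing_zeros(x):
    """Number of trailing zero bits of a 32-bit value (32 if the value is zero)."""
    return (x & -x).bit_length() - 1 if x else 32

def fm_sketch(stream):
    """Classic Flajolet-Martin: track the maximum trailing-zero count over hashed elements."""
    max_tz = 0
    for item in stream:
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:4], "big")
        max_tz = max(max_tz, trailing_zeros(h))
    return max_tz

def estimate_distinct(max_tz, phi=0.77351):
    """Single-sketch estimate of the distinct count (high variance without averaging sketches)."""
    return 2 ** max_tz / phi

stream = [i % 1000 for i in range(50_000)]   # 1000 distinct elements, many repeats
print(estimate_distinct(fm_sketch(stream)))
```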

Towards sparse federated analytics: Location heatmaps under distributed differential privacy with secure aggregation

E Bagdasaryan, P Kairouz, S Mellem, A Gascón… - arXiv preprint arXiv …, 2021 - arxiv.org
We design a scalable algorithm to privately generate location heatmaps over decentralized
data from millions of user devices. It aims to ensure differential privacy before data becomes …

Inference under information constraints III: Local privacy constraints

J Acharya, CL Canonne, C Freitag… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
We study goodness-of-fit and independence testing of discrete distributions in a setting
where samples are distributed across multiple users. The users wish to preserve the privacy …