Differential privacy in the shuffle model: A survey of separations

A Cheu - arXiv preprint arXiv:2107.11839, 2021 - arxiv.org
Differential privacy is often studied in one of two models. In the central model, a single
analyzer has the responsibility of performing a privacy-preserving computation on data. But …
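The two models named in this snippet, plus the shuffle model the survey is about, differ only in who is trusted with raw data. Below is a minimal sketch for a binary counting query, using randomized response as the local randomizer; the function names are illustrative and eps is the per-report privacy parameter, not the amplified end-to-end guarantee that shuffle-model analyses account for.

```python
import numpy as np

def local_randomize(bit, eps):
    """eps-LDP randomized response: report the true bit with prob e^eps / (e^eps + 1)."""
    keep = np.exp(eps) / (np.exp(eps) + 1)
    return bit if np.random.rand() < keep else 1 - bit

def central_sum(bits, eps):
    """Central model: a trusted analyzer sees the raw bits and adds Laplace noise
    calibrated to the counting query's sensitivity of 1."""
    return sum(bits) + np.random.laplace(scale=1.0 / eps)

def shuffled_sum(bits, eps):
    """Shuffle model: users randomize locally, a shuffler hides who sent what by
    permuting the reports, and the analyzer debiases the anonymous sum."""
    reports = [local_randomize(b, eps) for b in bits]
    np.random.shuffle(reports)                      # the anonymizing shuffle
    p = np.exp(eps) / (np.exp(eps) + 1)
    n = len(bits)
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```

The shuffle step is what allows the local guarantee to be amplified toward a central-style one, which is the kind of separation the survey catalogs.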

Private non-convex federated learning without a trusted server

A Lowy, A Ghafelebashi… - … Conference on Artificial …, 2023 - proceedings.mlr.press
We study federated learning (FL) with non-convex loss functions and data from people who
do not trust the server or other silos. In this setting, each silo (e.g., hospital) must protect the …

On differentially private federated linear contextual bandits

X Zhou, SR Chowdhury - arXiv preprint arXiv:2302.13945, 2023 - arxiv.org
We consider the cross-silo federated linear contextual bandit (LCB) problem under differential
privacy, where multiple silos (agents) interact with the local users and communicate via a …

Shuffle private linear contextual bandits

SR Chowdhury, X Zhou - arXiv preprint arXiv:2202.05567, 2022 - arxiv.org
Differential privacy (DP) has recently been introduced to linear contextual bandits to formally
address privacy concerns in the personalized services offered to participating users …

Differentially private stochastic linear bandits: (almost) for free

O Hanna, AM Girgis, C Fragouli… - IEEE Journal on …, 2024 - ieeexplore.ieee.org
In this paper, we propose differentially private algorithms for the problem of stochastic linear
bandits in the central, local and shuffled models. In the central model, we achieve almost the …

Distributed differential privacy in multi-armed bandits

SR Chowdhury, X Zhou - arXiv preprint arXiv:2206.05772, 2022 - arxiv.org
We consider the standard $K$-armed bandit problem under a distributed trust model of
differential privacy (DP), which enables privacy guarantees without a trustworthy server …
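Several bandit entries in this list share one recipe: the learner only ever consumes privatized reward statistics. A minimal central-model sketch for the K-armed case is below; the Laplace scale uses naive composition across rounds, and the batched, distributed, or shuffled aggregation schemes these papers actually analyze are omitted. The pull_arm interface and all names are assumptions for illustration.

```python
import numpy as np

def dp_ucb(pull_arm, K, T, eps):
    """Central-model DP UCB sketch: arm choices depend only on noisy reward sums.
    pull_arm(a) -> reward in [0, 1]; eps is split naively over T releases, which is
    much looser than the privacy accounting in the cited papers."""
    sums = np.zeros(K)
    counts = np.zeros(K)
    for t in range(T):
        if t < K:
            a = t                                    # pull every arm once to initialize
        else:
            noisy_means = (sums + np.random.laplace(scale=T / eps, size=K)) / counts
            bonus = np.sqrt(2.0 * np.log(t + 1) / counts)
            a = int(np.argmax(noisy_means + bonus))
        sums[a] += pull_arm(a)
        counts[a] += 1
    return counts                                    # how often each arm was pulled
```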

Differentially private linear bandits with partial distributed feedback

F Li, X Zhou, B Ji - … Symposium on Modeling and Optimization in …, 2022 - ieeexplore.ieee.org
In this paper, we study the problem of global reward maximization with only partial
distributed feedback. This problem is motivated by several real-world applications (e.g., …

Private federated learning without a trusted server: Optimal algorithms for convex losses

A Lowy, M Razaviyayn - arXiv preprint arXiv:2106.09779, 2021 - arxiv.org
This paper studies federated learning (FL), especially cross-silo FL, with data from people
who do not trust the server or other silos. In this setting, each silo (e.g., hospital) has data from …
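This setting (also the one in the non-convex companion paper listed above) is commonly formalized as inter-silo record-level DP: every update must be privatized inside the silo that produced it. A minimal sketch of one round is below; grad_fn, the clipping threshold, and the noise scale are illustrative placeholders, not the calibrated choices from the paper.

```python
import numpy as np

def silo_noisy_gradient(grad_fn, params, batch, clip_norm, sigma):
    """One silo's locally privatized minibatch gradient: clip each record's gradient
    and add Gaussian noise before anything leaves the silo, so neither the server
    nor other silos see raw per-record information."""
    per_example = [grad_fn(params, ex) for ex in batch]
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12)) for g in per_example]
    noise = np.random.normal(0.0, sigma * clip_norm, size=np.shape(params))
    return (np.sum(clipped, axis=0) + noise) / len(batch)

def federated_round(params, silo_batches, grad_fn, lr=0.1, clip_norm=1.0, sigma=1.0):
    """The untrusted server only averages gradients that are already noisy."""
    grads = [silo_noisy_gradient(grad_fn, params, batch, clip_norm, sigma)
             for batch in silo_batches]
    return params - lr * np.mean(grads, axis=0)
```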

Private stochastic optimization with large worst-case Lipschitz parameter: Optimal rates for (non-smooth) convex losses and extension to non-convex losses

A Lowy, M Razaviyayn - International Conference on …, 2023 - proceedings.mlr.press
We study differentially private (DP) stochastic optimization (SO) with loss functions whose
worst-case Lipschitz parameter over all data points may be extremely large. To date, the vast …

Multi-message shuffled privacy in federated learning

AM Girgis, S Diggavi - IEEE Journal on Selected Areas in …, 2024 - ieeexplore.ieee.org
We study the distributed mean estimation (DME) problem under privacy and communication
constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy …
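The single-message LDP baseline for DME is simple: each user clips and perturbs its own vector, and the server averages the reports. A minimal sketch with Gaussian local noise is below; sigma is left as a parameter rather than calibrated, and the multi-message shuffled protocols studied in the paper replace the single noisy report with several shuffled messages to get better accuracy at the same end-to-end privacy.

```python
import numpy as np

def ldp_report(x, clip_norm, sigma):
    """A user's locally privatized vector: clip to bounded norm, then add Gaussian
    noise whose scale sigma would be set by the local (eps, delta) target."""
    x = np.asarray(x, dtype=float)
    x = x * min(1.0, clip_norm / (np.linalg.norm(x) + 1e-12))
    return x + np.random.normal(0.0, sigma, size=x.shape)

def distributed_mean_estimate(user_vectors, clip_norm=1.0, sigma=1.0):
    """Server-side DME under LDP: just average the already-noisy reports."""
    reports = [ldp_report(x, clip_norm, sigma) for x in user_vectors]
    return np.mean(reports, axis=0)
```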