When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar? In this paper, we study the …
We propose and analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints. Rather than guaranteeing only the privacy of individual …
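For context, the user-level guarantee can be stated in the standard (ε, δ)-DP form with a strengthened neighboring relation: D and D' may differ in every record contributed by a single user, rather than in one sample as in item-level DP (the notation below is the usual one, not taken from this particular abstract):

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] + \delta \qquad \text{for every event } S.$$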
Z Xu, M Collins, Y Wang, L Panait… - Proceedings of the …, 2023 - openaccess.thecvf.com
Small on-device models have been successfully trained with user-level differential privacy (DP) for next word prediction and image classification tasks in the past. However, existing …
Previous work on user-level differential privacy (DP) [Ghazi et al., NeurIPS 2021; Bun et al., STOC 2023] obtained generic algorithms that work for various learning tasks. However, their …
This paper studies federated linear contextual bandits under the notion of user-level differential privacy (DP). We first introduce a unified federated bandits framework that can …
We consider the computation of sparse, (ε, δ)-differentially private (DP) histograms in the two-server model of secure multi-party computation (MPC), which has recently gained …
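For reference in the non-MPC, single-curator setting, here is a minimal sketch of the standard noise-and-threshold ("stability-based") construction of a sparse (ε, δ)-DP histogram. The Laplace scale 2/ε and threshold 1 + (2/ε)·ln(2/δ) follow one common analysis and are assumptions of this sketch; the two-server secure-computation protocol of the paper is not modeled here.

```python
import math
import numpy as np
from collections import Counter

def sparse_dp_histogram(records, epsilon, delta, rng=None):
    """Release a sparse (eps, delta)-DP histogram: add Laplace noise to the
    counts of items that actually occur, then keep only noisy counts above a
    threshold so that rare items (and the full domain) are never revealed.
    Noise scale and threshold follow one standard stability-based analysis."""
    rng = rng or np.random.default_rng()
    scale = 2.0 / epsilon                                    # Laplace scale b = 2/eps
    threshold = 1.0 + (2.0 / epsilon) * math.log(2.0 / delta)
    released = {}
    for item, count in Counter(records).items():             # only items present in the data
        noisy = count + rng.laplace(loc=0.0, scale=scale)
        if noisy > threshold:                                 # suppress small noisy counts
            released[item] = noisy
    return released

# Heavy-tailed toy input: only the two frequent items should survive the threshold.
data = ["a"] * 500 + ["b"] * 120 + ["c"] * 3 + ["d"]
print(sparse_dp_histogram(data, epsilon=1.0, delta=1e-6))
```

Thresholding is what keeps the histogram sparse over a huge domain: buckets nobody contributes to are never materialized, and the δ term absorbs the small probability that a near-empty bucket clears the threshold.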
R Impagliazzo, R Lei, T Pitassi, J Sorrell - Proceedings of the 54th annual …, 2022 - dl.acm.org
We introduce the notion of a reproducible algorithm in the context of learning. A reproducible learning algorithm is resilient to variations in its samples—with high probability, it returns the …
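The snippet's informal notion has a standard formalization: for samples S_1, S_2 drawn i.i.d. from the same distribution and shared internal randomness r, a ρ-reproducible algorithm A satisfies

$$\Pr_{S_1, S_2 \sim \mathcal{D}^n,\; r}\big[\,A(S_1; r) = A(S_2; r)\,\big] \;\ge\; 1 - \rho.$$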
N Alon, M Bun, R Livni, M Malliaris… - Journal of the ACM, 2022 - dl.acm.org
Let H be a binary-labeled concept class. We prove that H can be PAC learned by an (approximate) differentially private algorithm if and only if it has a finite Littlestone dimension …
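A standard illustration of this characterization (the example is not part of the snippet): thresholds over a finite ordered domain of size N have Littlestone dimension about log_2 N,

$$\mathrm{Ldim}\big(\{\,x \mapsto \mathbf{1}[x \ge t] : t \in [N]\,\}\big) \approx \log_2 N,$$

so they are privately PAC learnable, whereas thresholds over an infinite ordered domain have VC dimension 1 but infinite Littlestone dimension and therefore admit no (approximate) differentially private PAC learner.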
P Dixon, A Pavan, J Vander Woude… - Advances in …, 2024 - proceedings.neurips.cc
We investigate replicable learning algorithms. Informally, a learning algorithm is replicable if it outputs the same canonical hypothesis over multiple runs with high probability …
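To make "the same canonical hypothesis over multiple runs" concrete, the sketch below shows the common random-rounding trick for a replicable mean estimate; the grid width 2*alpha, the sample sizes, and the function name are illustrative assumptions, not the construction of the paper above. The estimate is snapped to a grid whose random offset is drawn from randomness shared across runs, so two runs on independent samples return bit-identical outputs with high probability.

```python
import numpy as np

def replicable_mean(samples, alpha, rng_shared):
    """Estimate the mean to accuracy well below alpha, then snap it to a grid
    of width 2*alpha whose offset comes from shared randomness.  Runs on
    independent samples that reuse the same offset land in the same grid cell,
    hence return the same canonical value, with high probability."""
    offset = rng_shared.uniform(0.0, 2 * alpha)   # shared across runs
    est = float(np.mean(samples))                 # accurate empirical mean
    k = round((est - offset) / (2 * alpha))       # index of the grid cell
    return offset + 2 * alpha * k

# Two runs on independent data, but with the same shared internal randomness.
shared_seed = 1234
data_rng = np.random.default_rng()
run1 = replicable_mean(data_rng.normal(0.3, 1.0, 20000), 0.05,
                       np.random.default_rng(shared_seed))
run2 = replicable_mean(data_rng.normal(0.3, 1.0, 20000), 0.05,
                       np.random.default_rng(shared_seed))
print(run1 == run2)   # True with high probability over the data draws
```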