Almost tight error bounds on differentially private continual counting

M. Henzinger, J. Upadhyay, S. Upadhyay - Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2023 - SIAM
The first large-scale deployment of private federated learning uses differentially private counting in the continual release model as a subroutine (Google AI blog titled "Federated Learning with Formal Differential Privacy Guarantees" on February 28, 2022). For this and several other applications, it is crucial to use a continual counting mechanism with small mean squared error. In this case, a concrete (or non-asymptotic) bound on the error is very relevant to reduce the privacy parameter ε as much as possible, and hence, it is important to improve upon the constant factor in the error term.

The standard mechanism for continual counting, and the one used in the above deployment, is the binary mechanism. We present a novel mechanism and show that its mean squared error is both asymptotically optimal and a factor 10 smaller than the error of the binary mechanism. We also show that the constants in our analysis are almost tight by giving non-asymptotic lower and upper bounds that differ only in the constants of lower-order terms. Our mechanism also has the advantage of taking only constant time per release, while the binary mechanism takes O(log n) time, where n is the total number of released data values.

Our algorithm is a matrix mechanism for the counting matrix. We also use our explicit factorization of the counting matrix to give an upper bound on the excess risk of the matrix-mechanism-based private learning algorithm of Denisov, McMahan, Rush, Smith, and Thakurta (NeurIPS 2022).
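To make the matrix-mechanism framing concrete, the following is a minimal sketch of continual counting as a matrix mechanism: the n prefix sums are the linear query Ax for the lower-triangular all-ones counting matrix A, one factorizes A = LR, and noise is added in the R-space and mapped through L. The Gaussian-noise instantiation and the trivial factorization L = A, R = I below are illustrative assumptions only; the paper's contribution is an explicit near-optimal factorization, which is not reproduced here.

```python
import numpy as np

def counting_matrix(n):
    """Lower-triangular all-ones matrix A: (A x)_t = x_1 + ... + x_t."""
    return np.tril(np.ones((n, n)))

def matrix_mechanism(x, L, R, sigma, rng):
    """Release noisy prefix sums via a factorization A = L @ R.

    Gaussian noise is added to R @ x and mapped through L, so the
    per-query error is governed by the row norms of L while the
    sensitivity is governed by the column norms of R.
    """
    z = rng.normal(0.0, sigma, size=R.shape[0])
    return L @ (R @ x + z)

rng = np.random.default_rng(0)
n = 8
A = counting_matrix(n)
# Trivial factorization (L = A, R = I), shown only for illustration;
# it recovers the plain noisy-prefix-sum mechanism, not the paper's
# near-optimal factorization.
L, R = A, np.eye(n)
x = rng.integers(0, 2, size=n).astype(float)
noisy_prefix_sums = matrix_mechanism(x, L, R, sigma=1.0, rng=rng)
exact_prefix_sums = A @ x
```

With sigma = 0 the release equals the exact prefix sums, which makes the linear-query structure easy to sanity-check; different factorizations trade off the noise geometry without changing the query being answered.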
Our lower bound for any continual counting mechanism is the first tight lower bound on continual counting under (ε, δ)-differential privacy, and it holds against a non-adaptive adversary. It is achieved using a new lower bound on a certain factorization norm, denoted by γ_F(·), in terms of the singular values of the matrix: for any complex matrix A ∈ ℂ^{m×n}, we bound γ_F(A) from below in terms of its Schatten-1 norm ||A||_1, i.e., the sum of the singular values of A. We believe this technique will be useful in proving lower bounds for a larger class of linear queries. To illustrate the power of this technique, we show the first lower bound on the mean squared error for answering parity queries. This bound applies to the non-continual setting and is asymptotically tight.
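The Schatten-1 (nuclear) norm appearing in the lower bound is simply the sum of a matrix's singular values. The following short sketch computes it for the counting matrix, the matrix whose factorization norm the paper lower-bounds; the helper name `schatten_1_norm` is ours, not the paper's.

```python
import numpy as np

def schatten_1_norm(A):
    """Schatten-1 (nuclear) norm: the sum of the singular values of A."""
    return np.linalg.svd(A, compute_uv=False).sum()

# ||A||_1 for the n x n counting matrix (lower-triangular all-ones).
n = 16
A = np.tril(np.ones((n, n)))
nuc = schatten_1_norm(A)
# Cross-check against numpy's built-in nuclear norm.
assert np.isclose(nuc, np.linalg.norm(A, "nuc"))
```

Because the singular values of the counting matrix grow with n, so does its Schatten-1 norm, which is what drives the lower bound on the mean squared error of any continual counting mechanism.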