S Gopi, YT Lee, D Liu - Conference on Learning Theory, 2022 - proceedings.mlr.press
In this paper, we study the private optimization problems for non-smooth convex functions $F(x)=\mathbb{E}_i f_i(x)$ on $\mathbb{R}^d$. We show that modifying the exponential …
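(The snippet refers to the exponential mechanism. As background only, and not the paper's continuous regularized variant, here is a minimal sketch of the basic discrete exponential mechanism; the names `candidates`, `loss`, `eps`, and `sensitivity` are illustrative assumptions.)

```python
import numpy as np

def exponential_mechanism(candidates, loss, eps, sensitivity, rng=None):
    """Select a candidate with probability proportional to exp(-eps * loss / (2 * sensitivity))."""
    rng = rng or np.random.default_rng()
    scores = np.array([loss(x) for x in candidates])
    # Shift by the minimum score for numerical stability before exponentiating.
    logits = -eps * (scores - scores.min()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]
```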
J Altschuler, K Talwar - Advances in Neural Information …, 2022 - proceedings.neurips.cc
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (aka …
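(The "Stochastic Gradient Descent with noise" referenced here is commonly known as DP-SGD. A minimal sketch of one such step, under the usual clip-then-add-Gaussian-noise recipe; the helper name and parameters are illustrative assumptions, not the paper's exact algorithm.)

```python
import numpy as np

def noisy_sgd_step(w, per_example_grads, lr, clip_norm, noise_multiplier, rng=None):
    """One DP-SGD-style step: clip each per-example gradient, average, add Gaussian noise."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12)) for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale is proportional to the clipping norm and inversely to the batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads), size=w.shape)
    return w - lr * (mean_grad + noise)
```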
We analytically investigate how over-parameterization of models in randomized machine learning algorithms impacts the information leakage about their training data. Specifically …
Differentially private (stochastic) gradient descent is the workhorse of differentially private machine learning in both the convex and non-convex settings. Without privacy constraints, second …
J Ye, R Shokri - Advances in Neural Information Processing …, 2022 - proceedings.neurips.cc
Prior work on differential privacy analysis of randomized SGD algorithms relies on composition theorems, where the implicit (unrealistic) assumption is that the internal state of …
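(The composition theorems mentioned here account for privacy loss across all released intermediate iterates. As context only, a sketch of the standard basic and advanced composition bounds; the function names are illustrative assumptions.)

```python
import math

def basic_composition(eps, delta, T):
    """Basic composition: (eps, delta) parameters add linearly across T adaptive releases."""
    return T * eps, T * delta

def advanced_composition(eps, delta, T, delta_prime):
    """Advanced composition (Dwork-Rothblum-Vadhan): sqrt(T)-scaling epsilon for an extra delta'."""
    eps_total = eps * math.sqrt(2 * T * math.log(1 / delta_prime)) + T * eps * (math.exp(eps) - 1)
    return eps_total, T * delta + delta_prime
```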
Differential Privacy (DP) ensures that training a machine learning model does not leak private data. However, the cost of DP is lower model accuracy or higher sample complexity …
We propose a new framework for differentially private optimization of convex functions which are Lipschitz in an arbitrary norm $\|\cdot\|_x$. Our algorithms are based on a regularized …
We consider the problem of minimizing a non-convex objective while preserving the privacy of the examples in the training data. Building upon the previous variance-reduced algorithm …
In the context of first-order algorithms subject to random gradient noise, we study the trade-offs between the convergence rate (which quantifies how fast the initial conditions are …