Private Stochastic Convex Optimization and Sparse Learning with Heavy-tailed Data Revisited.

Y. Tao, Y. Wu, X. Cheng, D. Wang - IJCAI, 2022 - youmingtao.github.io
Abstract
In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) with heavy-tailed data, where the gradient of the loss function has bounded moments. Instead of the previously studied settings, in which the loss function is Lipschitz or each coordinate of the gradient has a bounded second moment, we consider a relaxed scenario in which each coordinate of the gradient only has a bounded (1+v)-th moment for some v ∈ (0, 1]. First, we study one-dimensional private mean estimation for heavy-tailed distributions and propose a novel robust and private mean estimator that is optimal. Building on this idea, we then extend to the general d-dimensional space and study DP-SCO with both general convex and strongly convex loss functions. We also provide lower bounds for these two classes of losses under our setting and show that our upper bounds are optimal up to a factor of O(poly(d)). To address the high-dimensionality issue, we further study DP-SCO with heavy-tailed gradients under a sparsity constraint (DP sparse learning). We propose a new method and show that it is also optimal up to a factor of O(s*), where s* is the underlying sparsity of the constraint.
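To make the moment assumption concrete: each coordinate g_j of the gradient is only required to satisfy E[|g_j|^(1+v)] ≤ M for some v ∈ (0, 1] and constant M, which permits distributions without a finite variance. For intuition about the one-dimensional private mean estimation subproblem, below is a minimal sketch of a standard clip-then-noise baseline for DP mean estimation under heavy tails. This is an illustrative assumption, not the estimator proposed in the paper (the abstract does not specify its construction); the function name dp_clipped_mean, the threshold tau, and the use of Laplace noise are all hypothetical choices.

```python
import numpy as np

def dp_clipped_mean(x, tau, eps, rng=None):
    """One-dimensional DP mean estimate via a clip-then-noise baseline.

    NOTE: a generic sketch, NOT the estimator proposed in the paper.
    Each sample is clipped to [-tau, tau] to bound sensitivity, the
    clipped values are averaged, and Laplace noise calibrated to that
    sensitivity gives (eps, 0)-differential privacy.

    Args:
        x:   1-D array of n i.i.d. samples (possibly heavy-tailed).
        tau: clipping threshold (hypothetical tuning parameter).
        eps: privacy parameter epsilon.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    clipped = np.clip(x, -tau, tau)
    # Replacing one sample changes the sum by at most 2*tau,
    # so the average has sensitivity 2*tau / n.
    sensitivity = 2.0 * tau / n
    return clipped.mean() + rng.laplace(loc=0.0, scale=sensitivity / eps)

# Usage sketch: heavy-tailed samples (Student-t with 1.5 degrees of
# freedom has a bounded (1+v)-th moment only for v < 0.5).
samples = np.random.default_rng(0).standard_t(df=1.5, size=10_000)
print(dp_clipped_mean(samples, tau=20.0, eps=1.0))
```

Under the bounded (1+v)-th moment assumption, the clipping bias of such an estimator is roughly O(M / tau^v) while the injected noise scales as O(tau / (n * eps)); balancing the two suggests taking tau to grow like (n * eps)^(1/(1+v)), yielding an error on the order of (n * eps)^(-v/(1+v)). This back-of-the-envelope trade-off is a known heuristic for clip-based estimators, not a claim about the paper's optimal construction.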