Authors
Shixiang Chen, Alfredo Garcia, Shahin Shahrampour
Publication date
2021/2/2
Journal
IEEE Transactions on Automatic Control
Volume
67
Issue
2
Pages
662-675
Publisher
IEEE
Abstract
The stochastic subgradient method is a widely used algorithm for solving large-scale optimization problems arising in machine learning. Often, these problems are neither smooth nor convex. Recently, Davis et al. (2018) characterized the convergence of the stochastic subgradient method for the weakly convex case, which encompasses many important applications (e.g., robust phase retrieval, blind deconvolution, biconvex compressive sensing, and dictionary learning). In practice, distributed implementations of the projected stochastic subgradient method (stoDPSM) are used to speed up risk minimization. In this article, we propose a distributed implementation of the stochastic subgradient method with a theoretical guarantee. Specifically, we show the global convergence of stoDPSM using the Moreau envelope stationarity measure. Furthermore, under a so-called sharpness condition, we show that deterministic …
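To make the setting concrete, below is a minimal sketch of the (single-node, non-distributed) projected stochastic subgradient method on a weakly convex problem, using robust phase retrieval, minimizing the average of the losses |⟨aᵢ, x⟩² − bᵢ|, as an example. All names, the ball radius, and the stepsize schedule are illustrative assumptions, not the paper's stoDPSM algorithm or its parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x, radius=10.0):
    """Euclidean projection onto the constraint set {x : ||x|| <= radius} (assumed for illustration)."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def phase_retrieval_subgrad(x, a, b):
    """A subgradient of the weakly convex loss |<a, x>^2 - b| at x."""
    r = (a @ x) ** 2 - b
    return np.sign(r) * 2.0 * (a @ x) * a

# Synthetic instance: noiseless measurements b_i = <a_i, x*>^2.
d, m = 5, 200
x_star = rng.normal(size=d)
A = rng.normal(size=(m, d))
b = (A @ x_star) ** 2

x = rng.normal(size=d)
for k in range(2000):
    i = rng.integers(m)                      # sample one measurement uniformly
    g = phase_retrieval_subgrad(x, A[i], b[i])
    step = 0.01 / np.sqrt(k + 1)             # diminishing stepsize (illustrative choice)
    x = project_ball(x - step * g)           # projected subgradient step

loss = np.mean(np.abs((A @ x) ** 2 - b))
```

Because the loss is nonsmooth and nonconvex, convergence here is measured not by the loss gradient (which need not exist) but by near-stationarity of the Moreau envelope, the smoothed function inf_y f(y) + (1/2λ)‖y − x‖², which is the stationarity measure the article uses for its global convergence guarantee.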