Authors
Guannan Qu, Na Li
Publication date
2019/8/26
Journal
IEEE Transactions on Automatic Control
Volume
65
Issue
6
Pages
2566-2581
Publisher
IEEE
Description
This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. We develop an accelerated distributed Nesterov gradient descent method. When the objective function is convex and L-smooth, we show that it achieves a O(1/t^(1.4-ϵ)) convergence rate for all ϵ ∈ (0, 1.4). We also show the convergence rate can be improved to O(1/t^2) if the objective function is a composition of a linear map and a strongly convex and smooth function. When the objective function is μ-strongly convex and L-smooth, we show that it achieves a linear convergence rate of O([1 - C(μ/L)^(5/7)]^t), where L/μ is the condition number of the objective, and C > 0 is some constant that does not depend on L/μ.
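To make the setting concrete, below is a minimal sketch of a distributed Nesterov-type gradient iteration: agents holding local objectives mix their iterates through a doubly stochastic gossip matrix and add a momentum (extrapolation) step. This is an illustration of the general idea only, not the paper's Acc-DNGD updates; the mixing matrix W, the local quadratic objectives, and the step-size/momentum values eta and beta are all assumed for the example.

```python
import numpy as np

# Sketch: n agents minimize (1/n) * sum_i f_i(x), with local quadratic
# losses f_i(x) = 0.5 * ||A_i x - b_i||^2 (chosen here for illustration).
rng = np.random.default_rng(0)
n, d = 5, 10                        # number of agents, problem dimension
A = [rng.standard_normal((20, d)) for _ in range(n)]
b = [rng.standard_normal(20) for _ in range(n)]

def local_grad(i, x):
    # Gradient of agent i's local objective at x.
    return A[i].T @ (A[i] @ x - b[i])

# Ring-graph mixing matrix (symmetric, doubly stochastic): each agent
# averages with its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

eta, beta = 1e-3, 0.9               # illustrative step size and momentum
x = np.zeros((n, d))                # agent iterates
y = np.zeros((n, d))                # momentum ("lookahead") iterates

for t in range(500):
    grads = np.stack([local_grad(i, y[i]) for i in range(n)])
    x_new = W @ y - eta * grads     # gossip averaging + local gradient step
    y = x_new + beta * (x_new - x)  # Nesterov-style extrapolation
    x = x_new

# Evaluate the global objective at the network-average iterate.
x_bar = x.mean(axis=0)
obj = sum(0.5 * np.linalg.norm(A[i] @ x_bar - b[i]) ** 2 for i in range(n)) / n
print(f"objective at averaged iterate: {obj:.4f}")
```

The paper's accelerated rates depend on carefully chosen step-size and momentum schedules and additional consensus variables; the fixed eta and beta above are placeholders for a runnable demonstration of the communicate-then-update structure.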
Total citations
Per-year citation counts, 2017-2024 (chart not reproduced).