Adaptive accelerated gradient converging method under Hölderian error bound condition

M Liu, T Yang - Advances in Neural Information Processing …, 2017 - proceedings.neurips.cc
Abstract
Recent studies have shown that the proximal gradient (PG) method and the accelerated proximal gradient (APG) method with restarting can enjoy a linear convergence under a condition weaker than strong convexity, namely a quadratic growth condition (QGC). However, the faster convergence of the restarting APG method relies on the potentially unknown constant in the QGC to appropriately restart APG, which restricts its applicability. We address this issue by developing a novel adaptive gradient converging method, i.e., leveraging the magnitude of the proximal gradient as a criterion for restart and termination. Our analysis extends to a much more general condition beyond the QGC, namely the Hölderian error bound (HEB) condition. The key technique for our development is a novel synthesis of adaptive regularization and a conditional restarting scheme, which extends previous work focusing on strongly convex problems to a much broader family of problems. Furthermore, we demonstrate that our results have important implications and applications in machine learning: (i) if the objective function is coercive and semi-algebraic, PG's convergence speed is essentially $o(1/t)$, where $t$ is the total number of iterations; (ii) if the objective function consists of an $\ell_1$, $\ell_\infty$, $\ell_{1,\infty}$, or Huber norm regularization and a convex smooth piecewise quadratic loss (e.g., square loss, squared hinge loss, and Huber loss), the proposed algorithm is parameter-free and enjoys a faster linear convergence than PG without any other assumptions (e.g., a restricted eigenvalue condition). It is notable that our linear convergence results for the aforementioned problems are global instead of local. To the best of our knowledge, these improved results are shown for the first time in this work.
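
The restart-and-terminate idea in the abstract can be illustrated with a short sketch. The Python code below is a minimal illustration under stated assumptions, not the paper's exact method: it runs a FISTA-style accelerated proximal gradient on an $\ell_1$-regularized problem and uses the norm of the proximal gradient mapping both to decide when to reset the momentum and when to stop. The names (apg_with_pg_restart, prox_l1), the restart_factor parameter, and the specific restart rule are hypothetical choices for illustration; the paper's adaptive regularization component is omitted.

```python
import numpy as np

def prox_l1(v, thresh):
    """Soft-thresholding: proximal operator of thresh * ||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def apg_with_pg_restart(grad, prox, x0, step, lam, n_iters=1000,
                        restart_factor=0.5, tol=1e-8):
    """FISTA-style APG with a conditional restart (illustrative sketch only):
    momentum is reset whenever the proximal-gradient norm has shrunk by
    `restart_factor`, and the same norm serves as the stopping criterion."""
    # Proximal gradient mapping at z: G(z) = (z - prox(z - step*grad(z), step*lam)) / step
    def pg_norm(z):
        return np.linalg.norm(z - prox(z - step * grad(z), step * lam)) / step

    x = y = x0.copy()
    t = 1.0
    best_pg = pg_norm(x0)
    for _ in range(n_iters):
        x_new = prox(y - step * grad(y), step * lam)      # proximal gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))  # FISTA momentum weight
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new

        g = pg_norm(x)
        if g <= tol:                       # terminate via proximal-gradient magnitude
            break
        if g <= restart_factor * best_pg:  # conditional restart of the momentum
            best_pg = g
            y, t = x.copy(), 1.0
    return x
```

A typical instantiation is the lasso with square loss, 0.5*||Ax - b||^2 + lam*||x||_1, using grad = lambda x: A.T @ (A @ x - b), prox = prox_l1, and step = 1.0 / np.linalg.norm(A, 2)**2.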