Sharpness, restart and acceleration
V Roulet, A d'Aspremont - Advances in Neural Information …, 2017 - proceedings.neurips.cc
The Łojasiewicz inequality shows that Hölderian error bounds on the minimum of
convex optimization problems hold almost generically. Here, we clarify results of …
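For context, the Hölderian error bound (also called sharpness) referenced here is typically written as follows; μ, r, the set K and the optimal set X* are generic symbols, and normalizations of the constant vary between papers.

```latex
% Sharpness / Hölderian error bound on a set K containing the minimizers X*:
% mu > 0, r >= 1, and d(x, X*) is the distance from x to the optimal set.
f(x) - f^\star \;\ge\; \mu \, d(x, X^\star)^{r}
\qquad \text{for all } x \in K .
```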
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization
Alternating direction method of multipliers (ADMM) has received tremendous interest for
solving numerous problems in machine learning, statistics and signal processing. However …
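For context on adaptive penalization, a minimal sketch of scaled ADMM with the classical residual-balancing update of the penalty parameter (Boyd et al. style); this is the standard heuristic, not the new adaptive scheme of the paper, and prox_f, prox_g, A, B, c are placeholder assumptions.

```python
import numpy as np

def admm_residual_balancing(prox_f, prox_g, A, B, c, rho=1.0,
                            n_iter=200, mu=10.0, tau=2.0):
    """Scaled ADMM for min f(x) + g(z)  s.t.  A x + B z = c.

    prox_f(v, rho) ~ argmin_x f(x) + (rho/2)||A x - v||^2   (placeholder)
    prox_g(v, rho) ~ argmin_z g(z) + (rho/2)||B z - v||^2   (placeholder)
    The penalty rho is adapted with the classical residual-balancing
    heuristic, not the penalization scheme proposed in the paper.
    """
    x = np.zeros(A.shape[1])
    z = np.zeros(B.shape[1])
    u = np.zeros(A.shape[0])                       # scaled dual variable
    for _ in range(n_iter):
        x = prox_f(c - B @ z - u, rho)             # x-update
        z_old = z
        z = prox_g(c - A @ x - u, rho)             # z-update
        r = A @ x + B @ z - c                      # primal residual
        u = u + r                                  # dual update (scaled form)
        s = rho * A.T @ (B @ (z - z_old))          # dual residual
        # Residual balancing: keep ||r|| and ||s|| within a factor mu.
        if np.linalg.norm(r) > mu * np.linalg.norm(s):
            rho *= tau
            u /= tau                               # rescale the scaled dual
        elif np.linalg.norm(s) > mu * np.linalg.norm(r):
            rho /= tau
            u *= tau
    return x, z
```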
Adaptive restart of accelerated gradient methods under local quadratic growth condition
By analyzing accelerated proximal gradient methods under a local quadratic growth
condition, we show that restarting these algorithms at any frequency gives a globally linearly …
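A minimal sketch of restarting an accelerated proximal gradient (FISTA-style) method at a fixed frequency, the kind of scheme analyzed here; grad_f, prox_g, step and the restart period are placeholder assumptions.

```python
import numpy as np

def restarted_fista(grad_f, prox_g, x0, step, restart_every=100, n_iter=1000):
    """FISTA with periodic restarts: the momentum is reset every
    `restart_every` iterations; under a local quadratic growth condition
    such restarting yields linear convergence (see the paper for details)."""
    x = y = np.array(x0, dtype=float)
    t = 1.0
    for k in range(1, n_iter + 1):
        x_new = prox_g(y - step * grad_f(y), step)        # prox-gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum sequence
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        if k % restart_every == 0:                        # periodic restart
            y, t = x, 1.0                                 # reset momentum
    return x
```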
Stochastic convex optimization: Faster local growth implies faster global convergence
In this paper, a new theory is developed for first-order stochastic convex optimization,
showing that the global convergence rate is sufficiently quantified by a local growth rate of …
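As a hedged illustration of the restart-in-stages idea behind such results (not the paper's algorithm): run a stochastic subgradient method in stages, restart each stage from the previous stage's average, and shrink the step size geometrically; stoch_grad, x0, step0 and the stage length are placeholder assumptions.

```python
import numpy as np

def staged_sgd(stoch_grad, x0, step0=0.1, stage_len=1000, n_stages=10):
    """Multi-stage stochastic subgradient method: each stage runs plain SGD
    with a constant step size and returns its iterate average; the step size
    is halved between stages. Under a local growth condition this kind of
    restarted scheme attains a faster global rate than a single SGD run."""
    x_bar = np.array(x0, dtype=float)
    step = step0
    for _ in range(n_stages):
        x = x_bar.copy()
        acc = np.zeros_like(x)
        for _ in range(stage_len):
            x = x - step * stoch_grad(x)   # stochastic subgradient step
            acc += x
        x_bar = acc / stage_len            # restart next stage from the average
        step /= 2.0                        # geometrically decreasing step size
    return x_bar
```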
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
In this paper we study the convergence properties of Nesterov's family of inertial schemes,
a specific case of the inertial gradient descent algorithm, in the context of a smooth …
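For reference, the Nesterov-type inertial scheme studied in this line of work typically takes the following form, with inertial coefficient (k-1)/(k+alpha-1); grad_f, step and alpha are placeholder assumptions.

```python
import numpy as np

def inertial_gradient_descent(grad_f, x0, step, alpha=3.0, n_iter=1000):
    """Inertial (Nesterov-type) gradient scheme:
        y_k     = x_k + (k - 1)/(k + alpha - 1) * (x_k - x_{k-1})
        x_{k+1} = y_k - step * grad_f(y_k)
    alpha = 3 recovers the classical (k - 1)/(k + 2) momentum sequence."""
    x_prev = x = np.array(x0, dtype=float)
    for k in range(1, n_iter + 1):
        beta = (k - 1.0) / (k + alpha - 1.0)   # inertial coefficient
        y = x + beta * (x - x_prev)
        x_prev, x = x, y - step * grad_f(y)
    return x
```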
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
We propose new restarting strategies for the accelerated coordinate descent method. Our
main contribution is to show that for a well-chosen sequence of restarting times, the restarted …
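A small illustration of how a rough strong convexity estimate could set the restart period of an accelerated method; both the rule and the constant below are assumptions for illustration only, and the referenced restarted_fista is the generic sketch given earlier, not the paper's accelerated coordinate descent method.

```python
import numpy as np

def restart_period_from_estimate(L, mu_hat, safety=2.0):
    """Illustrative rule: choose the restart period proportional to
    sqrt(L / mu_hat), where mu_hat is only a rough strong convexity
    estimate. The proportionality constant `safety` is an assumption;
    the paper derives the actual schedule and its robustness to mu_hat."""
    return max(1, int(np.ceil(safety * np.sqrt(L / mu_hat))))

# Example usage with the earlier generic sketch (placeholder oracles):
# period = restart_period_from_estimate(L, mu_hat)
# restarted_fista(grad_f, prox_g, x0, step=1.0 / L, restart_every=period)
```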
Adaptive SVRG methods under error bound conditions with unknown growth parameter
Error bound, an inherent property of an optimization problem, has recently seen revived interest in the
development of algorithms with improved global convergence without strong convexity. The …
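For reference, a minimal sketch of plain SVRG, the base method this paper builds on; grad_i, n, x0 and step are placeholder assumptions, and the paper's adaptive search over the unknown growth parameter is not reproduced here.

```python
import numpy as np

def svrg(grad_i, n, x0, step, m_inner=None, n_epochs=20):
    """Basic SVRG: the variance-reduced gradient
    grad_i(x, i) - grad_i(snapshot, i) + full_grad(snapshot)
    is used in the inner loop; the snapshot is refreshed every epoch."""
    snapshot = np.array(x0, dtype=float)
    m_inner = m_inner or 2 * n
    for _ in range(n_epochs):
        full_grad = sum(grad_i(snapshot, i) for i in range(n)) / n
        x = snapshot.copy()
        for _ in range(m_inner):
            i = np.random.randint(n)
            v = grad_i(x, i) - grad_i(snapshot, i) + full_grad
            x = x - step * v                  # variance-reduced step
        snapshot = x.copy()                   # restart epoch from last iterate
    return snapshot
```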
Radial duality part II: applications and algorithms
B Grimmer - Mathematical Programming, 2024 - Springer
The first part of this work established the foundations of a radial duality between
nonnegative optimization problems, inspired by the work of Renegar (SIAM J Optim 26 (4) …
Frank-Wolfe method is automatically adaptive to error bound condition
The error bound condition has recently gained renewed interest in optimization. It has been
leveraged to derive faster convergence for many popular algorithms, including subgradient …
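A minimal sketch of the vanilla Frank-Wolfe method with the standard open-loop step size, the unmodified method that the title says adapts automatically; grad_f and the linear minimization oracle lmo are placeholder assumptions.

```python
import numpy as np

def frank_wolfe(grad_f, lmo, x0, n_iter=500):
    """Plain Frank-Wolfe (conditional gradient) with the 2/(k+2) step size.
    lmo(g) returns argmin_{s in C} <g, s> over the feasible set C
    (placeholder oracle). No parameter depends on the error bound
    condition, which is the point of the paper's adaptivity result."""
    x = np.array(x0, dtype=float)
    for k in range(n_iter):
        s = lmo(grad_f(x))                 # linear minimization oracle
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s  # move toward the extreme point
    return x
```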
Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization
Various types of parameter restart schemes have been proposed for accelerated gradient
algorithms to facilitate their practical convergence in convex optimization. However, the …
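For contrast with fixed-frequency restarting, a sketch of the common function-value restart heuristic (O'Donoghue and Candès style), which resets the momentum whenever the objective increases; this is a standard heuristic, not the flexible restart schemes proposed in the paper, and f, grad_f, prox_g and step are placeholder assumptions.

```python
import numpy as np

def fista_adaptive_restart(f, grad_f, prox_g, x0, step, n_iter=1000):
    """FISTA with a function-value restart: whenever the objective
    increases, the momentum sequence is reset to its initial value."""
    x = y = np.array(x0, dtype=float)
    t = 1.0
    f_prev = f(x)
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)        # prox-gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        f_new = f(x_new)
        if f_new > f_prev:                 # objective went up: restart
            y, t_new = x_new, 1.0          # reset momentum
        x, t, f_prev = x_new, t_new, f_new
    return x
```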