Acceleration by stepsize hedging: Silver Stepsize Schedule for smooth convex optimization

JM Altschuler, PA Parrilo - Mathematical Programming, 2024 - Springer
We provide a concise, self-contained proof that the Silver Stepsize Schedule proposed in
our companion paper directly applies to smooth (non-strongly) convex optimization …
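
The schedule itself is easy to generate. Below is a minimal Python sketch, assuming the doubling recursion pi_{k+1} = [pi_k, 1 + rho^(k-1), pi_k] with pi_1 = [sqrt(2)] and rho = 1 + sqrt(2) (the silver ratio), which matches the schedule's reported structure but should be checked against the paper's exact definition; the proved rate is on the order of n^(-log2 rho) ≈ n^(-1.27).

```python
import numpy as np

RHO = 1 + np.sqrt(2)  # silver ratio

def silver_schedule(k):
    """Silver stepsizes of length 2**k - 1 (normalized by 1/L).

    Assumes the doubling recursion pi_{j+1} = [pi_j, 1 + RHO**(j-1), pi_j]
    with pi_1 = [sqrt(2)]; check against Altschuler & Parrilo for the
    exact definition.
    """
    steps = [np.sqrt(2)]
    for j in range(1, k):
        steps = steps + [1 + RHO ** (j - 1)] + steps
    return np.array(steps)

def gradient_descent(grad, x0, stepsizes, L):
    """Run GD with stepsizes h_t / L on an L-smooth convex objective."""
    x = np.asarray(x0, dtype=float)
    for h in stepsizes:
        x = x - (h / L) * grad(x)
    return x

# Toy usage: least squares f(x) = 0.5 * ||A x - b||^2 with L = ||A||_2^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2
grad = lambda x: A.T @ (A @ x - b)
x = gradient_descent(grad, np.zeros(5), silver_schedule(6), L)
```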

Exact worst-case convergence rates of gradient descent: a complete analysis for all constant stepsizes over nonconvex and convex functions

T Rotaru, F Glineur, P Patrinos - arXiv preprint arXiv:2406.17506, 2024 - arxiv.org
We consider gradient descent with constant stepsizes and derive exact worst-case
convergence rates on the minimum gradient norm of the iterates. Our analysis covers all …
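
As a point of reference for the setting analyzed here, a minimal Python sketch of constant-stepsize gradient descent that records the minimum gradient norm over the iterates (the quantity whose worst case the paper characterizes); the exact rates and admissible stepsize ranges are in the paper.

```python
import numpy as np

def gd_min_grad_norm(grad, x0, stepsize, n_iters):
    """Constant-stepsize GD; returns the last iterate and the minimum
    gradient norm observed along the trajectory."""
    x = np.asarray(x0, dtype=float)
    best = np.inf
    for _ in range(n_iters + 1):
        g = grad(x)
        best = min(best, np.linalg.norm(g))
        x = x - stepsize * g
    return x, best

# Example: smooth convex quadratic with L = 4; constant stepsizes are often
# written as h / L, with h in (0, 2) the classical convergent regime.
grad = lambda x: 4.0 * x
x, g_min = gd_min_grad_norm(grad, np.array([10.0]), stepsize=1.5 / 4.0, n_iters=100)
```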

AdaBB: Adaptive Barzilai-Borwein method for convex optimization

D Zhou, S Ma, J Yang - arXiv preprint arXiv:2401.08024, 2024 - arxiv.org
In this paper, we propose AdaBB, an adaptive gradient method based on the Barzilai-
Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially …
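
For context, the classical Barzilai-Borwein stepsize that AdaBB builds on can be sketched as below: a generic BB gradient method in Python, using the long BB stepsize alpha_k = <s, s> / <s, y>. AdaBB's specific adaptive safeguards are not reproduced here.

```python
import numpy as np

def bb_gradient_method(grad, x0, n_iters, alpha0=1e-3):
    """Gradient method with the (long) Barzilai-Borwein stepsize
    alpha_k = <s, s> / <s, y>, where s = x_k - x_{k-1} and
    y = grad(x_k) - grad(x_{k-1}). No line search or safeguards."""
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev          # one plain gradient step to start
    for _ in range(n_iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        alpha = (s @ s) / (s @ y) if s @ y > 0 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Toy usage on an ill-conditioned convex quadratic.
A = np.diag([1.0, 10.0, 100.0])
grad = lambda x: A @ x
x = bb_gradient_method(grad, np.ones(3), n_iters=50)
```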

From stability to chaos: Analyzing gradient descent dynamics in quadratic regression

X Chen, K Balasubramanian, P Ghosal… - arXiv preprint arXiv …, 2023 - arxiv.org
We conduct a comprehensive investigation into the dynamics of gradient descent using
large-order constant step-sizes in the context of quadratic regression models. Within this …
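
As a hedged illustration of the stability threshold underlying such dynamics (not the paper's quadratic regression model), a one-dimensional quadratic already shows the transition: GD on f(x) = (L/2) x^2 contracts for eta < 2/L and diverges beyond it.

```python
import numpy as np

def gd_trajectory(L, eta, x0=1.0, n_iters=50):
    """Iterate x_{t+1} = (1 - eta * L) * x_t, i.e. GD on f(x) = (L/2) x^2."""
    xs = [x0]
    for _ in range(n_iters):
        xs.append((1.0 - eta * L) * xs[-1])
    return np.array(xs)

L = 1.0
for eta in (0.5, 1.9, 2.1):      # below, near, and above the 2/L threshold
    traj = gd_trajectory(L, eta)
    print(eta, abs(traj[-1]))    # contracts, oscillates while shrinking, blows up
```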

Variable step sizes for iterative Jacobian-based inverse kinematics of robotic manipulators

J Colan, A Davila, Y Hasegawa - IEEE Access, 2024 - ieeexplore.ieee.org
This study evaluates the impact of step size selection on Jacobian-based inverse kinematics
(IK) for robotic manipulators. Although traditional constant step size approaches offer …
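
A minimal Python sketch of the kind of update whose step size is being tuned: one damped-least-squares IK step for a planar 2-link arm, with the step size alpha exposed as the parameter under study. The arm geometry and damping value are illustrative choices, not the paper's setup.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end effector."""
    return np.array([
        l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
        l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_step(q, target, alpha, damping=1e-2):
    """One damped-least-squares IK update with step size alpha."""
    e = target - fk(q)                       # task-space error
    J = jacobian(q)
    dq = J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(2), e)
    return q + alpha * dq

# Iterate toward a reachable target; alpha is the step size being studied.
q, target = np.array([0.3, 0.3]), np.array([1.2, 0.8])
for _ in range(100):
    q = ik_step(q, target, alpha=0.5)
```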

Accelerated gradient descent by concatenation of stepsize schedules

Z Zhang, R Jiang - arXiv preprint arXiv:2410.12395, 2024 - arxiv.org
This work considers stepsize schedules for gradient descent on smooth convex objectives.
We extend the existing literature and propose a unified technique for constructing stepsizes …

Directional Smoothness and Gradient Methods: Convergence and Adaptivity

A Mishkin, A Khaled, Y Wang, A Defazio… - arXiv preprint arXiv …, 2024 - arxiv.org
We develop new sub-optimality bounds for gradient descent (GD) that depend on the
conditioning of the objective along the path of optimization, rather than on global, worst-case …

Learning Algorithm Hyperparameters for Fast Parametric Convex Optimization

R Sambharya, B Stellato - arXiv preprint arXiv:2411.15717, 2024 - arxiv.org
We introduce a machine-learning framework to learn the hyperparameter sequence of first-
order methods (e.g., the step sizes in gradient descent) to quickly solve parametric convex …
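
A toy Python sketch of the general idea, not the authors' learning framework: choose a shared stepsize schedule by minimizing the average final loss over sampled problem instances, here via crude random search over candidate schedules.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_problem():
    """Sample a parametric least-squares instance f(x) = 0.5 * ||A x - b||^2."""
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    return A, b

def final_loss(stepsizes, A, b):
    """Run GD with the given per-iteration stepsizes and return the final loss."""
    x = np.zeros(A.shape[1])
    for eta in stepsizes:
        x = x - eta * A.T @ (A @ x - b)
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

# "Train": search for a 10-step schedule that works well on average over a
# batch of sampled instances (a stand-in for learning the hyperparameters).
train_set = [sample_problem() for _ in range(32)]
best_sched, best_val = None, np.inf
for _ in range(200):
    sched = 10 ** rng.uniform(-3, -1, size=10)      # candidate stepsizes
    val = np.mean([final_loss(sched, A, b) for A, b in train_set])
    if val < best_val:
        best_sched, best_val = sched, val
```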

Relaxed proximal point algorithm: Tight complexity bounds and acceleration without momentum

B Wang, S Ma, J Yang, D Zhou - arXiv preprint arXiv:2410.08890, 2024 - arxiv.org
In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex
(possibly nonsmooth) optimization problems. We conduct a comprehensive study on three …
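
For reference, the relaxed proximal point update is x_{k+1} = (1 - lam_k) x_k + lam_k prox_{c f}(x_k); a minimal Python sketch on a problem with a closed-form prox (the l1 norm, via soft-thresholding) is below. The paper's relaxation and stepsize choices are not reproduced here.

```python
import numpy as np

def prox_l1(x, c):
    """Proximal operator of c * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

def relaxed_ppa(prox, x0, relaxations):
    """Relaxed proximal point algorithm:
    x_{k+1} = (1 - lam_k) * x_k + lam_k * prox(x_k)."""
    x = np.asarray(x0, dtype=float)
    for lam in relaxations:
        x = (1.0 - lam) * x + lam * prox(x)
    return x

# Minimize ||x||_1 from a dense starting point; lam in (0, 2) is the usual range.
x = relaxed_ppa(lambda z: prox_l1(z, c=0.1), np.array([1.0, -2.0, 0.05]),
                relaxations=[1.5] * 50)
```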

Anytime Acceleration of Gradient Descent

Z Zhang, JD Lee, SS Du, Y Chen - arXiv preprint arXiv:2411.17668, 2024 - arxiv.org
This work investigates stepsize-based acceleration of gradient descent with anytime
convergence guarantees. For smooth (non-strongly) convex optimization, we propose a …