T Rotaru, F Glineur, P Patrinos - arXiv preprint arXiv:2406.17506, 2024 - arxiv.org
We consider gradient descent with constant stepsizes and derive exact worst-case convergence rates on the minimum gradient norm of the iterates. Our analysis covers all …
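As a point of reference, here is a minimal sketch (not the paper's analysis) of gradient descent with a constant stepsize on a toy least-squares problem, recording the minimum gradient norm over the iterates, the quantity whose exact worst-case rate the paper studies. The objective, the 1/L stepsize, and the iteration count are assumptions for illustration.

```python
import numpy as np

# Gradient descent with a constant stepsize on 0.5*||Ax - b||^2,
# tracking the minimum gradient norm over the iterates.
def gd_min_grad_norm(A, b, stepsize, n_iters):
    grad = lambda x: A.T @ (A @ x - b)
    x, best = np.zeros(A.shape[1]), np.inf
    for _ in range(n_iters):
        g = grad(x)
        best = min(best, np.linalg.norm(g))
        x = x - stepsize * g
    return x, best

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2          # smoothness constant of the quadratic
print(gd_min_grad_norm(A, b, stepsize=1.0 / L, n_iters=200)[1])
```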
D Zhou, S Ma, J Yang - arXiv preprint arXiv:2401.08024, 2024 - arxiv.org
In this paper, we propose AdaBB, an adaptive gradient method based on the Barzilai-Borwein stepsize. The algorithm is line-search-free and parameter-free, and essentially …
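For context, a minimal sketch of line-search-free gradient descent with the classical Barzilai-Borwein (BB1) stepsize is given below; AdaBB's actual rule adds safeguards not reproduced here, and the test problem is an assumption.

```python
import numpy as np

# Line-search-free gradient descent with the classical BB1 stepsize
# alpha_k = (s_k . s_k) / (s_k . y_k), s_k = x_k - x_{k-1}, y_k = g_k - g_{k-1}.
def bb_gradient_descent(grad, x0, alpha0=1e-3, n_iters=100):
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev       # one plain GD step to initialize
    for _ in range(n_iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev
        denom = s @ y
        alpha = (s @ s) / denom if denom > 0 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

A = np.diag([1.0, 5.0, 10.0])          # toy strongly convex quadratic
print(bb_gradient_descent(lambda x: A @ x, np.ones(3)))
```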
We conduct a comprehensive investigation into the dynamics of gradient descent using large-order constant step-sizes in the context of quadratic regression models. Within this …
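A toy sketch of the kind of setting described, assuming a one-dimensional quadratic regression model y = (w*x)^2 and a deliberately large constant step-size; the data and step-size are illustrative choices, not the paper's.

```python
import numpy as np

# Gradient descent on the quadratic regression model y = (w*x)^2 with a
# large constant step-size; the iterates overshoot and oscillate around
# w_true before settling.
x = np.linspace(-1.0, 1.0, 50)
w_true = 1.5
y = (w_true * x) ** 2

def grad(w):
    r = (w * x) ** 2 - y               # residuals
    return np.mean(4.0 * r * w * x ** 2)

w, eta = 0.1, 0.5                      # deliberately large constant step-size
iterates = []
for _ in range(60):
    w -= eta * grad(w)
    iterates.append(w)
print(iterates[-5:])                   # oscillates around w_true = 1.5
```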
This study evaluates the impact of step size selection on Jacobian-based inverse kinematics (IK) for robotic manipulators. Although traditional constant step size approaches offer …
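For illustration, here is a standard Jacobian-pseudoinverse IK loop with a constant step size on a planar two-link arm; the arm model, step size, and target are assumptions, not the paper's evaluation setup.

```python
import numpy as np

# Jacobian-pseudoinverse IK for a planar 2-link arm with a constant step size.
l1, l2 = 1.0, 1.0                      # link lengths

def fk(q):                             # forward kinematics: end-effector position
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik(target, q0, alpha=0.5, n_iters=200, tol=1e-6):
    q = np.array(q0, dtype=float)
    for _ in range(n_iters):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        q += alpha * np.linalg.pinv(jacobian(q)) @ err   # constant step size
    return q

q = ik(target=np.array([1.2, 0.8]), q0=[0.3, 0.3])
print(q, fk(q))
```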
Z Zhang, R Jiang - arXiv preprint arXiv:2410.12395, 2024 - arxiv.org
This work considers stepsize schedules for gradient descent on smooth convex objectives. We extend the existing literature and propose a unified technique for constructing stepsizes …
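A minimal sketch of running gradient descent under a prescribed stepsize schedule is shown below; the placeholder schedule is an assumption and does not reproduce the schedules constructed in the paper.

```python
import numpy as np

# Gradient descent driven by a prescribed stepsize schedule (one stepsize
# per iteration), shown on a smooth convex quadratic.
def gd_with_schedule(grad, x0, schedule):
    x = np.asarray(x0, dtype=float)
    for eta in schedule:
        x = x - eta * grad(x)
    return x

A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
L = 10.0                               # smoothness constant
schedule = [1.5 / L if t % 2 else 0.5 / L for t in range(40)]  # placeholder schedule
print(gd_with_schedule(grad, [1.0, 1.0], schedule))
```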
We develop new sub-optimality bounds for gradient descent (GD) that depend on the conditioning of the objective along the path of optimization, rather than on global, worst-case …
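As a rough illustration of path-dependent conditioning (not the paper's definition), one can estimate local smoothness along the gradient-descent trajectory from successive gradients and iterates:

```python
import numpy as np

# Estimate local smoothness along the gradient-descent path via
# ||g_{k+1} - g_k|| / ||x_{k+1} - x_k||, a simple path-dependent quantity.
def gd_path_conditioning(grad, x0, stepsize, n_iters):
    x = np.asarray(x0, dtype=float)
    g, local_L = grad(x), []
    for _ in range(n_iters):
        x_new = x - stepsize * g
        g_new = grad(x_new)
        local_L.append(np.linalg.norm(g_new - g) / np.linalg.norm(x_new - x))
        x, g = x_new, g_new
    return local_L

A = np.diag([0.1, 1.0, 10.0])          # global smoothness constant is 10
local_L = gd_path_conditioning(lambda x: A @ x, [1.0, 1.0, 1.0], 0.05, 50)
print(max(local_L), min(local_L))      # early steps see ~10, later steps much less
```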
We introduce a machine-learning framework to learn the hyperparameter sequence of first-order methods (e.g., the step sizes in gradient descent) to quickly solve parametric convex …
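A toy sketch of the idea, assuming a family of random well-conditioned quadratics and meta-gradient descent via finite differences on the final objective after a fixed number of unrolled steps; the problem family, horizon, clipping range, and training scheme are illustrative, not the paper's framework.

```python
import numpy as np

# Learn a stepsize sequence for unrolled gradient descent on random
# positive-definite quadratics by minimizing the final objective value.
rng = np.random.default_rng(0)
T, d = 10, 5                                  # unrolled GD steps, dimension

def sample_problem():
    A = rng.standard_normal((d, d))
    Q = A.T @ A / (4 * d) + np.eye(d)         # random well-conditioned quadratic
    b = rng.standard_normal(d)
    return Q, b

def final_loss(steps, Q, b):
    x = np.zeros(d)
    for eta in steps:                         # unrolled gradient descent
        x = x - eta * (Q @ x - b)
    return 0.5 * x @ Q @ x - b @ x            # objective value after T steps

def meta_grad(steps, problems, eps=1e-5):
    base = np.mean([final_loss(steps, *p) for p in problems])
    g = np.zeros_like(steps)
    for i in range(len(steps)):
        pert = steps.copy()
        pert[i] += eps
        g[i] = (np.mean([final_loss(pert, *p) for p in problems]) - base) / eps
    return g

steps = np.full(T, 0.1)
problems = [sample_problem() for _ in range(20)]
for _ in range(100):                          # meta-training loop
    steps = np.clip(steps - 0.05 * meta_grad(steps, problems), 1e-3, 0.9)
print(steps)
```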
B Wang, S Ma, J Yang, D Zhou - arXiv preprint arXiv:2410.08890, 2024 - arxiv.org
In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex (possibly nonsmooth) optimization problems. We conduct a comprehensive study on three …
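For reference, the relaxed proximal point iteration is x_{k+1} = (1 - rho) * x_k + rho * prox_{gamma f}(x_k); the sketch below applies it to f(x) = ||x - a||_1, whose prox is a shifted soft-thresholding. The relaxation parameter, proximal stepsize, and test function are illustrative assumptions.

```python
import numpy as np

# Relaxed proximal point algorithm (RPPA):
#   x_{k+1} = (1 - rho) * x_k + rho * prox_{gamma f}(x_k)
def rppa(prox, x0, rho=1.5, n_iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = (1 - rho) * x + rho * prox(x)
    return x

a, gamma = np.array([1.0, -2.0, 0.5]), 0.3
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
prox = lambda v: a + soft(v - a, gamma)       # prox of gamma * ||x - a||_1
print(rppa(prox, np.zeros(3)))                # converges to a
```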
This work investigates stepsize-based acceleration of gradient descent with anytime convergence guarantees. For smooth (non-strongly) convex optimization, we propose a …
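To illustrate the distinction between anytime and last-iterate behavior (with placeholder schedules, not the paper's proposal): a constant 1/L stepsize decreases a quadratic objective at every iteration, while an aggressive non-constant schedule can let intermediate objective values spike.

```python
import numpy as np

# Compare a constant 1/L stepsize (objective decreases every iteration) with
# an aggressive schedule containing long steps (objective can spike mid-run).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def run(schedule, x0):
    x = np.asarray(x0, dtype=float)
    vals = [f(x)]
    for eta in schedule:
        x = x - eta * grad(x)
        vals.append(f(x))
    return vals

L, T = 10.0, 20
constant = [1.0 / L] * T
long_steps = [3.0 / L if t % 4 == 0 else 1.0 / L for t in range(T)]
vals_const, vals_long = run(constant, [1.0, 1.0]), run(long_steps, [1.0, 1.0])
print(all(a >= b for a, b in zip(vals_const, vals_const[1:])))   # True: anytime decrease
print(vals_long[:3])                   # spikes above f(x0) after the first long step
```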