The proximal gradient algorithm for minimizing the sum of a smooth and nonsmooth convex function often converges linearly even without strong convexity. One common reason is that …
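As a rough illustration of the setting in this snippet, the following minimal Python sketch runs the proximal gradient iteration on a composite objective f(x) + g(x) with f smooth and g nonsmooth. The choice g(x) = lam*||x||_1 (soft-thresholding prox), the lasso-type f, and the 1/L step size are assumptions made here for concreteness, not details taken from the cited paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iters=500):
    """Iterate x_{k+1} = prox_{step*g}(x_k - step * grad_f(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Hypothetical lasso-type instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: soft_threshold(v, lam * t)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L = ||A||_2^2 the Lipschitz constant of grad_f
x_hat = proximal_gradient(grad_f, prox_g, np.zeros(10), step)
```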
Optimization algorithms can see their local convergence rates deteriorate when the Hessian at the optimum is singular. These singularities are inescapable when the optima are non …
T Yang, Q Lin - Journal of Machine Learning Research, 2018 - jmlr.org
In this paper, we study the efficiency of a Restarted SubGradient (RSG) method that periodically restarts the standard subgradient method (SG). We show that, when applied to a …
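The restart idea described in this snippet, running the standard subgradient method in epochs and restarting it periodically, can be sketched as follows. The iterate averaging, the geometric step-size decrease, and the epoch lengths below are illustrative placeholders, not the schedule analyzed by Yang and Lin.

```python
import numpy as np

def subgradient_epoch(subgrad, x0, step, n_inner):
    """Standard subgradient method with a fixed step; returns the running average iterate."""
    x = np.asarray(x0, dtype=float)
    avg = x.copy()
    for k in range(1, n_inner + 1):
        x = x - step * subgrad(x)
        avg += (x - avg) / (k + 1)
    return avg

def restarted_subgradient(subgrad, x0, step0, n_inner=200, n_epochs=10):
    """Periodically restart the subgradient method from the averaged iterate,
    shrinking the step between epochs (the halving schedule is only illustrative)."""
    x, step = np.asarray(x0, dtype=float), step0
    for _ in range(n_epochs):
        x = subgradient_epoch(subgrad, x, step, n_inner)
        step *= 0.5
    return x

# Hypothetical test problem: f(x) = ||x - c||_1 (nonsmooth, not strongly convex).
c = np.array([1.0, -2.0, 0.5])
x_hat = restarted_subgradient(lambda x: np.sign(x - c), np.zeros(3), step0=1.0)
```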
We present a framework for analyzing convergence and local rates of convergence of a class of descent algorithms, assuming the objective function is weakly convex. The …
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity …
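A generic proximal Newton step of the kind this snippet refers to can be sketched as below: minimize a second-order model of the smooth part plus the untouched nonsmooth term, with the subproblem solved here by plain proximal gradient. The inner solver, the l1 regularizer, and the absence of a line search or inexactness test are simplifying assumptions, not the paper's actual algorithm.

```python
import numpy as np

def prox_newton_step(grad_f, hess_f, prox_g, x, n_inner=50):
    """One generic proximal Newton step: approximately minimize the model
       <grad_f(x), d> + 0.5 * d^T H d + g(x + d)   over d,
    using proximal gradient on the model (a simple, illustrative inner solver)."""
    gx, H = grad_f(x), hess_f(x)
    t = 1.0 / np.linalg.norm(H, 2)   # inner step: 1 / Lipschitz constant of the model gradient
    d = np.zeros_like(x)
    for _ in range(n_inner):
        v = d - t * (gx + H @ d)     # gradient step on the quadratic model
        d = prox_g(x + v, t) - x     # prox of g, shifted into the d variable
    return x + d

# Hypothetical composite instance: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(1)
A, b, lam = rng.standard_normal((30, 8)), rng.standard_normal(30), 0.1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
grad_f = lambda x: A.T @ (A @ x - b)
hess_f = lambda x: A.T @ A
prox_g = lambda v, t: soft(v, lam * t)
x = np.zeros(8)
for _ in range(10):                  # a few outer proximal Newton steps
    x = prox_newton_step(grad_f, hess_f, prox_g, x)
```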
D Drusvyatskiy - arXiv preprint arXiv:1712.06038, 2017 - arxiv.org
In this short survey, I revisit the role of the proximal point method in large scale optimization. I focus on three recent examples: a proximally guided subgradient method for weakly convex …
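The proximal point viewpoint in this survey snippet can be illustrated by an inexact proximal point loop in which each regularized subproblem is attacked by a few subgradient steps, in the spirit of a proximally guided subgradient method. The step sizes, iteration counts, and the toy weakly convex objective below are assumptions for illustration only, not the schemes analyzed in the survey.

```python
import numpy as np

def inexact_proximal_point(subgrad_f, x0, t=0.2, n_outer=20, n_inner=200):
    """Inexact proximal point loop: each outer step approximately solves
       min_y  f(y) + (1/(2t)) * ||y - x_k||^2
    with a few subgradient steps on the regularized subproblem."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        y = x.copy()
        for k in range(n_inner):
            step = t / (k + 1)                   # diminishing inner step (illustrative)
            y = y - step * (subgrad_f(y) + (y - x) / t)
        x = y
    return x

# Hypothetical weakly convex toy objective f(y) = |y_0^2 - 1| + |y_1|.
def subgrad_f(y):
    return np.array([np.sign(y[0] ** 2 - 1.0) * 2.0 * y[0], np.sign(y[1])])

x_hat = inexact_proximal_point(subgrad_f, np.array([2.0, 1.5]))
```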
This paper proposes and justifies two globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of …
We consider optimization algorithms that successively minimize simple Taylor-like models of the objective function. Methods of Gauss–Newton type for minimizing the composition of a …
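For one concrete instance of successively minimizing simple Taylor-like models, the sketch below takes the outer convex function to be 0.5*||.||^2, which yields a damped Gauss-Newton step with a closed-form model minimizer. The damping parameter t and the exponential-fit example are illustrative choices made here; the cited framework covers general convex outer functions, which this sketch does not.

```python
import numpy as np

def prox_gauss_newton(c, jac, x0, t=1.0, n_iters=50):
    """Successively minimize a Taylor-like model of F(x) = 0.5*||c(x)||^2:
       x_{k+1} = argmin_y 0.5*||c(x_k) + J(x_k)(y - x_k)||^2 + (1/(2t))*||y - x_k||^2,
    whose minimizer is available in closed form (a damped Gauss-Newton step)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        r, J = c(x), jac(x)
        d = np.linalg.solve(J.T @ J + np.eye(x.size) / t, -J.T @ r)
        x = x + d
    return x

# Hypothetical nonlinear least-squares fit: residuals r_i(p) = p0 * exp(p1 * s_i) - y_i,
# with data generated from p = (2.0, -1.0).
s = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(-1.0 * s)
c = lambda p: p[0] * np.exp(p[1] * s) - y
jac = lambda p: np.column_stack([np.exp(p[1] * s), p[0] * s * np.exp(p[1] * s)])
p_hat = prox_gauss_newton(c, jac, np.array([1.0, 0.0]))
```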
The paper is devoted to the study, characterizations, and applications of variational convexity of functions, the property that has been recently introduced by Rockafellar together …