This book, as the title suggests, is about first-order methods, namely, methods that exploit information on values and gradients/subgradients (but not Hessians) of the functions …
We consider derivative-free algorithms for stochastic and nonstochastic convex optimization problems that use only function values rather than gradients. Focusing on nonasymptotic …
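As a concrete illustration of this derivative-free setting, here is a minimal sketch in which the gradient is replaced by a two-point finite-difference estimate along a random direction and fed into plain stochastic gradient descent. This is a generic two-point scheme under an assumed smoothness of f, not the paper's exact algorithm; the smoothing radius `mu` and the step-size schedule are illustrative choices.

```python
import numpy as np

def two_point_grad_estimate(f, x, rng, mu=1e-4):
    """Estimate the gradient from two function values along a random direction.

    In expectation over u this approximates the gradient of a smoothed
    version of f; mu is the (assumed small) smoothing radius.
    """
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def zeroth_order_sgd(f, x0, steps=2000, lr=0.1, seed=0):
    """Plain SGD driven by function values only, via the estimate above."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for t in range(steps):
        g = two_point_grad_estimate(f, x, rng)
        x -= lr / np.sqrt(t + 1) * g      # diminishing step size
    return x

# Toy usage: minimize a smooth convex quadratic without ever calling its gradient.
f = lambda x: 0.5 * np.sum((x - 1.0) ** 2)
print(zeroth_order_sgd(f, np.zeros(5)))   # approaches the all-ones minimizer
```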
This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the summation of a differentiable (possibly nonconvex) …
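For objectives of this composite form, a minimal sketch of a proximal stochastic gradient step is given below, assuming the nonsmooth term is an l1 penalty so that its prox is the familiar soft-thresholding map. The names `stoch_grad` and `lam` and the constant step size are illustrative placeholders, not the paper's specific method.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd(stoch_grad, x0, lam=0.1, lr=0.01, steps=500, seed=0):
    """Generic proximal stochastic gradient for  E[F(x, xi)] + lam * ||x||_1.

    stoch_grad(x, rng) is assumed to return an unbiased estimate of the
    gradient of the smooth part; the l1 term is handled exactly via its prox.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = stoch_grad(x, rng)
        x = soft_threshold(x - lr * g, lr * lam)  # gradient step, then prox
    return x

# Toy usage: least squares with additive gradient noise plus an l1 penalty.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
sg = lambda x, rng: A.T @ (A @ x - b) + 0.1 * rng.standard_normal(2)
print(prox_sgd(sg, np.zeros(2)))
```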
An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between …
To make decisions optimally is a basic human desire. Whenever the situation and the objectives can be described quantitatively, this desire can be satisfied, to some extent, by …
A Beck, M Teboulle - Operations Research Letters, 2003 - Elsevier
The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly …
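The classic instance of MDA: on the probability simplex with the negative-entropy distance-generating function, the mirror step reduces to an exponentiated-gradient update followed by renormalization. The sketch below shows that instance; the step size and iteration count are illustrative choices.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, steps=200, lr=0.1):
    """Mirror descent on the probability simplex with the entropy mirror map.

    The mirror step becomes a multiplicative (exponentiated-gradient) update
    followed by renormalization.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = grad(x)
        x = x * np.exp(-lr * g)
        x /= x.sum()
    return x

# Toy usage: minimize <c, x> over the simplex; the minimum puts all mass on
# the smallest coordinate of c.
c = np.array([0.5, 0.2, 0.9])
print(entropic_mirror_descent(lambda x: c, np.ones(3) / 3))
```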
W Krichene, A Bayen… - Advances in neural …, 2015 - proceedings.neurips.cc
We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original continuous-time motivation of mirror descent with a recent ODE interpretation of …
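A simplified discrete-time sketch in the spirit of such accelerated schemes: a Bregman proximal-gradient sequence is linearly coupled with a mirror-descent sequence using the usual 2/(k+2) weights, both under the entropy mirror map on the simplex. This is an illustrative linear-coupling construction, not the paper's exact discretization of the ODE; `L` is an assumed smoothness constant.

```python
import numpy as np

def accelerated_mirror_descent(grad, x0, steps=300, L=1.0):
    """Couples a Bregman proximal-gradient sequence (y) with a mirror-descent
    sequence (z) on the simplex, both under the entropy mirror map."""
    y = np.asarray(x0, dtype=float).copy()
    s = np.zeros_like(y)                  # accumulated dual variable for z
    for k in range(steps):
        tau = 2.0 / (k + 2)
        w = np.exp(-(s - s.min()))        # stabilized entropic mirror map
        z = w / w.sum()
        x = tau * z + (1 - tau) * y       # linear coupling of the sequences
        g = grad(x)
        s += (k + 2) / (2 * L) * g        # mirror step with growing stepsize
        y = x * np.exp(-g / L)            # KL proximal gradient step, stepsize 1/L
        y /= y.sum()
    return y

# Toy usage: minimize <c, x> over the simplex; mass concentrates on argmin c.
c = np.array([0.5, 0.2, 0.9])
print(accelerated_mirror_descent(lambda x: c, np.ones(3) / 3))
```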
Y Nesterov - Mathematical programming, 2009 - Springer
In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual since they …
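One well-known member of this primal-dual family is weighted dual averaging, where all past subgradients are aggregated in the dual and mapped back to the feasible set in a single prox step. The sketch below instantiates it on the probability simplex with an entropic prox and a beta_k ~ gamma * sqrt(k) schedule; both choices are illustrative rather than the paper's specific scheme.

```python
import numpy as np

def dual_averaging_simplex(subgrad, n, steps=500, gamma=1.0):
    """Dual averaging over the probability simplex.

    All past subgradients are summed; the primal point is recovered from that
    sum through an entropic prox with beta_k = gamma * sqrt(k).
    """
    s = np.zeros(n)                        # running sum of subgradients
    x = np.ones(n) / n
    avg = np.zeros(n)
    for k in range(1, steps + 1):
        s += subgrad(x)
        beta = gamma * np.sqrt(k)
        w = np.exp(-(s - s.min()) / beta)  # stabilized entropic prox
        x = w / w.sum()
        avg += x
    return avg / steps                     # averaged iterate carries the guarantee

# Toy usage: minimize f(x) = ||x - p||_1 over the simplex; the minimizer is p.
p = np.array([0.2, 0.5, 0.3])
print(dual_averaging_simplex(lambda x: np.sign(x - p), 3))
```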
Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
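For the incremental setting surveyed here, a minimal sketch of the incremental subgradient method follows: the objective is a finite sum, one component subgradient is used per inner step, and the components are processed cyclically. The diminishing schedule gamma / sqrt(k) is one common illustrative choice, not the survey's only option.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, epochs=100, gamma=1.0):
    """Incremental subgradient method for minimizing sum_i f_i(x).

    Cycles through the component subgradients, taking one step per
    component; a diminishing stepsize keeps the scheme convergent for
    convex f_i.
    """
    x = np.asarray(x0, dtype=float).copy()
    k = 0
    for _ in range(epochs):
        for g_i in subgrads:              # one component at a time
            k += 1
            x -= gamma / np.sqrt(k) * g_i(x)
    return x

# Toy usage: sum of absolute deviations, f(x) = sum_i |x - b_i|; a minimizer
# is the median of the b_i. (The bi=bi default binds each b_i per closure.)
b = np.array([0.0, 1.0, 10.0])
gs = [lambda x, bi=bi: np.sign(x - bi) for bi in b]
print(incremental_subgradient(gs, np.array([5.0])))
```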