S Ghadimi, G Lan - SIAM Journal on Optimization, 2013 - SIAM
In this paper, we introduce a new stochastic approximation-type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear …
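For context, a minimal Python sketch of the random-iterate device that such RSG-type analyses rely on: run plain SGD and return an iterate drawn at random, which is what yields guarantees on the expected gradient norm in the nonconvex setting. The objective, noise model, and step size below are illustrative placeholders, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x):
    """Illustrative stochastic oracle for a smooth nonconvex objective
    f(x) = sum(x^2 / (1 + x^2)): true gradient plus Gaussian noise."""
    g = 2 * x / (1 + x**2) ** 2
    return g + 0.1 * rng.standard_normal(x.shape)

def rsg(x0, steps=1000, gamma=0.01):
    """Plain SGD, but the output is an iterate chosen uniformly at random;
    randomizing the output index is the key device in RSG-style bounds."""
    x = x0.copy()
    iterates = [x.copy()]
    for _ in range(steps):
        x = x - gamma * stoch_grad(x)
        iterates.append(x.copy())
    R = rng.integers(len(iterates))  # random output index
    return iterates[R]

x_out = rsg(np.ones(5))
```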
S Ghadimi, G Lan - Mathematical Programming, 2016 - Springer
In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method, originally designed for smooth convex optimization, to solve nonconvex and possibly …
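For reference, a sketch of the classical AG recursion being generalized, in its simplest constant-momentum form; the nonconvex extensions modify the step-size choices and add safeguards that this sketch omits.

```python
import numpy as np

def accelerated_gradient(grad, x0, alpha=0.01, beta=0.9, steps=500):
    """Classical Nesterov AG: take a gradient step at the extrapolated
    point y, then carry momentum forward. alpha is the step size."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(steps):
        y = x + beta * (x - x_prev)          # extrapolation step
        x_prev, x = x, y - alpha * grad(y)   # gradient step at y
    return x

# Example on a smooth quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
x_star = accelerated_gradient(lambda x: x, np.ones(3))
```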
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain conditions …
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to …
An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program …
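Schematically, ARC minimizes a cubic-regularized model m(s) = f(x) + gᵀs + ½ sᵀHs + (σ/3)‖s‖³ and adapts σ by a trust-region-style ratio test. The sketch below uses a closed-form Cauchy-point step along −g as a deliberately crude stand-in for the Lanczos-based subproblem solvers analysed in these papers.

```python
import numpy as np

def arc_cauchy(f, grad, hess, x0, sigma=1.0, tol=1e-6, max_iter=500):
    """Adaptive cubic regularization, schematically.
    Model: m(s) = f(x) + g.s + 0.5*s'Hs + (sigma/3)*||s||^3.
    Step: exact minimizer of m along -g (the Cauchy point)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Minimize m(-t*g) over t >= 0: positive root of
        #   sigma*||g||^3 * t^2 + (g'Hg) * t - ||g||^2 = 0.
        a, b, c = sigma * gnorm**3, g @ H @ g, -gnorm**2
        t = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        s = -t * g
        m_dec = -(g @ s + 0.5 * s @ H @ s + sigma / 3 * np.linalg.norm(s)**3)
        rho = (f(x) - f(x + s)) / max(m_dec, 1e-16)
        if rho > 0.1:                        # model predicts f well: accept
            x = x + s
            sigma = max(0.5 * sigma, 1e-8)   # and regularize less
        else:                                # poor prediction: regularize more
            sigma *= 2.0
    return x

# Example on a nonconvex toy objective f(x, y) = x^4 + y^2 - x*y.
f = lambda x: x[0]**4 + x[1]**2 - x[0]*x[1]
grad = lambda x: np.array([4*x[0]**3 - x[1], 2*x[1] - x[0]])
hess = lambda x: np.array([[12*x[0]**2, -1.0], [-1.0, 2.0]])
x_min = arc_cauchy(f, grad, hess, np.array([1.0, 1.0]))
```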
The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective …
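The flavor of the resulting bound, stated indicatively: with derivatives up to order p available, a p-th order regularization method reaches an ε-approximate first-order critical point in on the order of ε^{-(p+1)/p} evaluations, so higher-order information buys a better worst-case exponent.

```latex
\|\nabla f(x_k)\| \le \epsilon
\quad\text{within}\quad
O\!\bigl(\epsilon^{-(p+1)/p}\bigr)
\ \text{evaluations},
\qquad
p = 1:\ O(\epsilon^{-2}), \qquad
p = 2:\ O(\epsilon^{-3/2}).
```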
JM Kohler, A Lucchi - International Conference on Machine …, 2017 - proceedings.mlr.press
We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as …
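A sketch of the sub-sampling idea in this line of work, under simplistic assumptions (uniform minibatch, toy per-example objective) rather than the paper's adaptive sample-size conditions: estimate the Hessian of a finite-sum objective from a random minibatch, then pass the estimate to a cubic-regularization step such as the one sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2))   # toy data: f_i(x) = log(1 + (a_i . x)^2)

def hess_i(i, x):
    """Per-example Hessian of the nonconvex loss f_i(x) = log(1 + (a_i.x)^2)."""
    a, u = A[i], A[i] @ x
    return (2 * (1 - u**2) / (1 + u**2) ** 2) * np.outer(a, a)

def subsampled_hessian(x, batch=10):
    """Average per-example Hessians over a random minibatch: an unbiased
    estimate of the full Hessian (1/n) * sum_i hess_i(i, x)."""
    idx = rng.choice(len(A), size=batch, replace=False)
    return sum(hess_i(i, x) for i in idx) / batch

H_hat = subsampled_hessian(np.array([0.5, -0.3]))
```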
C Cartis, NIM Gould, PL Toint - SIAM Journal on Optimization, 2010 - SIAM
It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may both require a number of iterations and …
G Ughi, V Abrol, J Tanner - Optimization and Engineering, 2022 - Springer
We perform a comprehensive study on the performance of derivative-free optimization (DFO) algorithms for the generation of targeted black-box adversarial attacks on Deep Neural …
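A toy illustration of the black-box setting studied here, hedged: greedy random search on a stand-in scoring function, not one of the paper's benchmarked attack algorithms. Only loss evaluations are used, no gradients, and the perturbation is kept inside an L-infinity ball around the original input.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_search_attack(loss, x, eps=0.05, steps=500, sigma=0.01):
    """Derivative-free targeted attack: propose small random perturbations,
    keep those that decrease the targeted loss, and project back onto the
    L-infinity ball of radius eps around the original input x."""
    x0 = x.copy()
    best = x.copy()
    best_loss = loss(best)
    for _ in range(steps):
        cand = best + sigma * rng.standard_normal(x.shape)
        cand = np.clip(cand, x0 - eps, x0 + eps)   # projection onto the ball
        val = loss(cand)
        if val < best_loss:                        # greedy acceptance rule
            best, best_loss = cand, val
    return best

# Stand-in "network": a logistic score; the target is to push the score
# toward class 1, so the loss is the negated sigmoid of the score.
w = rng.standard_normal(20)
loss = lambda x: -1.0 / (1.0 + np.exp(-(w @ x)))
x_adv = random_search_attack(loss, np.zeros(20))
```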