Recent advances in trust region algorithms

Y Yuan - Mathematical Programming, 2015 - Springer
Trust region methods are a class of numerical methods for optimization. Unlike line search
type methods where a line search is carried out in each iteration, trust region methods …

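For orientation, the subproblem that defines this class of methods can be stated in its standard textbook form (notation g_k = ∇f(x_k), B_k a Hessian approximation; not notation taken from the survey itself):

```latex
% Trust-region subproblem at iterate x_k: minimize a quadratic model of f
% within a ball of radius Delta_k; the ratio rho_k of actual to predicted
% reduction then drives step acceptance and the radius update.
\min_{s \in \mathbb{R}^n} \; m_k(s) = f(x_k) + g_k^\top s + \tfrac{1}{2}\, s^\top B_k s
\quad \text{s.t.} \quad \|s\| \le \Delta_k,
\qquad
\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(0) - m_k(s_k)}.
```
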
Stochastic first- and zeroth-order methods for nonconvex stochastic programming

S Ghadimi, G Lan - SIAM Journal on Optimization, 2013 - SIAM
In this paper, we introduce a new stochastic approximation type algorithm, namely, the
randomized stochastic gradient (RSG) method, for solving an important class of nonlinear …

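A minimal sketch of the randomized-output idea behind RSG, assuming `stoch_grad` returns an unbiased estimate of the gradient; returning a uniformly chosen iterate here simplifies the stepsize-weighted distribution the paper actually analyzes:

```python
import random

def rsg(stoch_grad, x0, stepsizes, rng=random.Random(0)):
    """Sketch of a randomized stochastic gradient (RSG) loop: run plain
    stochastic gradient steps, then output an iterate drawn at random
    from the trajectory. The randomized output is what yields bounds on
    E[||grad f||^2] in the nonconvex setting; the paper samples the
    output index with stepsize-dependent weights, uniform here for brevity.
    """
    xs = []
    x = list(x0)
    for gamma in stepsizes:
        g = stoch_grad(x)  # assumed unbiased gradient estimate
        x = [xi - gamma * gi for xi, gi in zip(x, g)]
        xs.append(x)
    return rng.choice(xs)
```
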
Accelerated gradient methods for nonconvex nonlinear and stochastic programming

S Ghadimi, G Lan - Mathematical Programming, 2016 - Springer
In this paper, we generalize Nesterov's well-known accelerated gradient (AG) method,
originally designed for convex smooth optimization, to solve nonconvex and possibly …

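For context, the convex AG scheme being generalized can be sketched as follows (FISTA-style momentum; the paper's nonconvex variant instead runs two coupled stepsize sequences):

```python
def nesterov_ag(grad, x0, lipschitz, n_iters):
    """Sketch of Nesterov's accelerated gradient method in its classical
    convex form, the starting point Ghadimi & Lan generalize, not the
    nonconvex algorithm from the paper itself.
    """
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(n_iters):
        gy = grad(y)
        x_new = [yi - gi / lipschitz for yi, gi in zip(y, gy)]  # gradient step at y
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        beta = (t - 1.0) / t_new  # momentum coefficient
        y = [xn + beta * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```
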
Newton-type methods for non-convex optimization under inexact Hessian information

P Xu, F Roosta, MW Mahoney - Mathematical Programming, 2020 - Springer
We consider variants of trust-region and adaptive cubic regularization methods for non-
convex optimization, in which the Hessian matrix is approximated. Under certain condition …

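One common way to realize the inexact-Hessian condition studied in this line of work is sub-sampling over component functions; the sketch below is illustrative only, with `hess_i` a hypothetical per-sample Hessian oracle:

```python
import numpy as np

def subsampled_hessian(hess_i, n_data, sample_size, x, rng=np.random.default_rng(0)):
    """Illustrative inexact Hessian built by sub-sampling: average the
    Hessians of a random subset of component functions, one standard way
    to satisfy a bound of the form ||H_k - hess f(x_k)|| <= eps_k.
    """
    idx = rng.choice(n_data, size=sample_size, replace=False)
    return sum(hess_i(i, x) for i in idx) / sample_size
```
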
Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results

C Cartis, NIM Gould, PL Toint - Mathematical Programming, 2011 - Springer
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for
unconstrained optimization, generalizing at the same time an unpublished method due to …

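The ARC model the abstract refers to is the usual quadratic model augmented with an adaptive cubic term, with σ_k increased after unsuccessful steps and decreased after successful ones, much like an inverse trust-region radius:

```latex
% ARC's local model, minimized (possibly only approximately) at each iteration:
m_k(s) = f(x_k) + g_k^\top s + \tfrac{1}{2}\, s^\top B_k s + \tfrac{\sigma_k}{3}\,\|s\|^3 .
```
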
Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity

C Cartis, NIM Gould, PL Toint - Mathematical Programming, 2011 - Springer
An Adaptive Regularisation framework using Cubics (ARC) was proposed for
unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program …

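The headline bound of Part II, stated informally: under Lipschitz-continuous Hessians, a suitable ARC variant needs at most

```latex
O\!\left(\epsilon^{-3/2}\right) \ \text{evaluations to reach}\ \|\nabla f(x_k)\| \le \epsilon,
```

improving on the O(ε^{-2}) guarantee of steepest descent.
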
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

EG Birgin, JL Gardenghi, JM Martínez… - Mathematical Programming, 2017 - Springer
The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained
optimization is considered. It is shown that, if one is willing to use derivatives of the objective …

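The construction extends the cubic idea to arbitrary order: regularize the pth-order Taylor model with a (p+1)st power, which (informally stated) yields the complexity below:

```latex
m_k(s) = \sum_{j=0}^{p} \frac{1}{j!}\, \nabla^j f(x_k)[s]^j + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},
\qquad
O\!\left(\epsilon^{-(p+1)/p}\right) \ \text{evaluations to reach}\ \|\nabla f(x_k)\| \le \epsilon .
```
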
Sub-sampled cubic regularization for non-convex optimization

JM Kohler, A Lucchi - International Conference on Machine Learning, 2017 - proceedings.mlr.press
We consider the minimization of non-convex functions that typically arise in machine
learning. Specifically, we focus our attention on a variant of trust region methods known as …

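A sketch of the sub-sampling step in this setting: gradient and Hessian information are estimated on independent mini-batches and plugged into the cubic model above. All names and the sample sizes are illustrative, not taken from the paper:

```python
import numpy as np

def subsampled_cubic_oracles(grads, hessvecs, x, s_g, s_h, rng=np.random.default_rng(0)):
    """Build sub-sampled oracles for the cubic model
    g^T s + 0.5 s^T H s + (sigma/3)||s||^3, where `grads[i]` and
    `hessvecs[i]` are assumed to evaluate the gradient and
    Hessian-vector product of the i-th component function.
    """
    n = len(grads)
    i_g = rng.choice(n, size=s_g, replace=False)
    i_h = rng.choice(n, size=s_h, replace=False)
    g = sum(grads[i](x) for i in i_g) / s_g

    def hess_vec(v):
        # sub-sampled Hessian-vector product; enough for Krylov-type
        # cubic-subproblem solvers, no explicit Hessian needed
        return sum(hessvecs[i](x, v) for i in i_h) / s_h

    return g, hess_vec
```
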
On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems

C Cartis, NIM Gould, PL Toint - SIAM Journal on Optimization, 2010 - SIAM
It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex
optimization under standard assumptions may both require a number of iterations and …

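Informally, the negative result referenced here: there are smooth examples on which both methods need on the order of

```latex
\Omega\!\left(\epsilon^{-2}\right) \ \text{iterations to reach}\ \|\nabla f(x_k)\| \le \epsilon,
```

so the classical O(ε^{-2}) upper bound for steepest descent is sharp, suggesting that without regularization Newton's method offers no worst-case gain.
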
An empirical study of derivative-free-optimization algorithms for targeted black-box attacks in deep neural networks

G Ughi, V Abrol, J Tanner - Optimization and Engineering, 2022 - Springer
We perform a comprehensive study on the performance of derivative-free optimization (DFO)
algorithms for the generation of targeted black-box adversarial attacks on Deep Neural …
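
As background on the algorithm family compared in the study, a generic zeroth-order gradient estimator of the kind several black-box attacks build on (a sketch under stated assumptions, not code from the paper; `loss` is a hypothetical query-only attack objective over a numpy vector):

```python
import numpy as np

def fd_gradient_estimate(loss, x, h=1e-3, n_dirs=50, rng=np.random.default_rng(0)):
    """Estimate grad loss(x) from function queries only: average central
    finite differences along random unit directions, scaled by the
    dimension so the estimate is approximately unbiased.
    """
    d = x.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # random unit direction
        g += (loss(x + h * u) - loss(x - h * u)) / (2.0 * h) * u
    return g * (d / n_dirs)
```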