Derivative-free optimization methods

J Larson, M Menickelly, SM Wild - Acta Numerica, 2019 - cambridge.org
In many optimization problems arising from scientific, engineering and artificial intelligence
applications, objective and constraint functions are available only as the output of a black …
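
A minimal sketch of the black-box setting this survey addresses: the solver may query only function values, never derivatives. Nelder-Mead (via SciPy) stands in for the many derivative-free methods the survey covers, and the Rosenbrock objective is an illustrative stand-in for an expensive simulation.

```python
import numpy as np
from scipy.optimize import minimize

def black_box(x):
    # Stand-in for an expensive simulation: only the value f(x) is
    # observable, no gradients (Rosenbrock function for illustration).
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

# Nelder-Mead uses only function values, matching the black-box setting.
result = minimize(black_box, x0=np.zeros(2), method="Nelder-Mead")
print(result.x, result.fun)
```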

Painless stochastic gradient: Interpolation, line-search, and convergence rates

S Vaswani, A Mishkin, I Laradji… - Advances in neural …, 2019 - proceedings.neurips.cc
Recent works have shown that stochastic gradient descent (SGD) achieves the fast
convergence rates of full-batch gradient descent for over-parameterized models satisfying …
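
A sketch of the idea in the snippet, assuming a simple least-squares model: the step size for each SGD update is set by Armijo backtracking evaluated on the same mini-batch that produced the gradient. The problem data and constants here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(200, 10)), rng.normal(size=200)
w = np.zeros(10)

def batch_loss_grad(w, idx):
    # Mini-batch least-squares loss and its gradient.
    r = A[idx] @ w - b[idx]
    return 0.5 * np.mean(r ** 2), A[idx].T @ r / len(idx)

for _ in range(100):
    idx = rng.choice(200, size=32, replace=False)
    loss, g = batch_loss_grad(w, idx)
    eta, c = 1.0, 0.1
    # Backtrack until the Armijo condition holds on the same mini-batch.
    while batch_loss_grad(w - eta * g, idx)[0] > loss - c * eta * (g @ g) and eta > 1e-10:
        eta *= 0.5
    w -= eta * g
```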

Newton-type methods for non-convex optimization under inexact Hessian information

P Xu, F Roosta, MW Mahoney - Mathematical Programming, 2020 - Springer
We consider variants of trust-region and adaptive cubic regularization methods for
non-convex optimization, in which the Hessian matrix is approximated. Under certain condition …
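
A sketch of one trust-region iteration in which the model Hessian B is only an approximation of the true Hessian; the ratio of actual to model-predicted decrease drives acceptance and the radius update. The constants (0.1, the factor 2, the cap 10.0) are illustrative, not the paper's.

```python
import numpy as np

def trust_region_step(f, grad, B, x, delta):
    # One iteration with an inexact model Hessian B; assumes grad(x) != 0.
    g = grad(x)
    gBg = g @ B @ g
    # Cauchy point: minimize the quadratic model along -g within radius delta.
    tau = min(1.0, np.linalg.norm(g) ** 3 / (delta * gBg)) if gBg > 0 else 1.0
    s = -(tau * delta / np.linalg.norm(g)) * g
    pred = -(g @ s + 0.5 * s @ B @ s)          # model-predicted decrease
    rho = (f(x) - f(x + s)) / pred             # actual vs. predicted decrease
    if rho >= 0.1:
        return x + s, min(2.0 * delta, 10.0)   # accept, possibly expand radius
    return x, 0.5 * delta                      # reject, shrink radius
```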

Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

C Cartis, K Scheinberg - Mathematical Programming, 2018 - Springer
We present global convergence rates for a line-search method which is based on random
first-order models and directions whose quality is ensured only with certain probability. We …
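
A stylized instance of a line search whose directions are good only with some probability: each iteration draws a random unit direction, estimates the slope by a forward difference, and applies an Armijo-type test, growing the step on success and shrinking it on failure. The quadratic objective and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: float(np.sum(x ** 2))              # illustrative smooth objective
x, alpha = np.ones(5), 1.0

for _ in range(500):
    d = rng.normal(size=5)
    d /= np.linalg.norm(d)                       # random unit direction
    slope = (f(x + 1e-6 * d) - f(x)) / 1e-6      # forward-difference slope
    if slope > 0:
        d, slope = -d, -slope                    # use the descent side
    if f(x + alpha * d) <= f(x) + 0.5 * alpha * slope:
        x, alpha = x + alpha * d, 2.0 * alpha    # success: accept and grow
    else:
        alpha *= 0.5                             # failure: shrink the step
```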

Convergence rate analysis of a stochastic trust-region method via supermartingales

J Blanchet, C Cartis, M Menickelly… - INFORMS journal on …, 2019 - pubsonline.informs.org
We propose a novel framework for analyzing convergence rates of stochastic optimization
algorithms with adaptive step sizes. This framework is based on analyzing properties of an …
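
A sketch of the kind of stochastic process such a framework analyzes, not the paper's algorithm: a step-size parameter that is expanded after estimated successes and contracted after failures, where success is judged from noisy function values.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: float(np.sum(x ** 2))
noisy_f = lambda x: f(x) + 0.01 * rng.normal()   # noisy value oracle

x, delta = np.ones(4), 1.0
for _ in range(200):
    g = x / max(np.linalg.norm(x), 1e-12)        # stylized descent direction
    s = -delta * g
    if noisy_f(x + s) < noisy_f(x):              # estimated success
        x, delta = x + s, 2.0 * delta
    else:
        delta *= 0.5                             # estimated failure
```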

First- and second-order high probability complexity bounds for trust-region methods with noisy oracles

L Cao, AS Berahas, K Scheinberg - Mathematical Programming, 2024 - Springer
In this paper, we present convergence guarantees for a modified trust-region method
designed for minimizing objective functions whose value and gradient and Hessian …
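
A sketch of one way a trust-region acceptance test can be made noise-aware (an illustrative rule, not necessarily the paper's exact modification): when function values carry error up to eps_f, a noise allowance is added so that good steps are not rejected merely because of oracle error.

```python
def noise_aware_accept(f_x, f_trial, predicted_decrease, eps_f, eta=0.1):
    # eps_f bounds the error in each function-value estimate; the
    # allowance 2 * eps_f compensates for the worst case across the
    # two evaluations (illustrative rule).
    rho = (f_x - f_trial + 2.0 * eps_f) / predicted_decrease
    return rho >= eta
```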

Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming

S Na, M Anitescu, M Kolar - Mathematical Programming, 2023 - Springer
We study nonlinear optimization problems with a stochastic objective and deterministic
equality and inequality constraints, which emerge in numerous applications including …
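
A sketch of one active-set SQP step under the setting of the snippet: inequality constraints c(x) <= 0 judged nearly active are treated as equalities, and the resulting KKT system yields the primal step and multiplier estimates. The function name and tolerance are illustrative, not the paper's.

```python
import numpy as np

def active_set_sqp_step(g, H, c, J, tol=1e-8):
    # g: (estimated) objective gradient; H: Lagrangian Hessian or a PSD
    # approximation; c, J: values and Jacobian of constraints c(x) <= 0.
    active = c >= -tol                 # constraints judged (nearly) active
    Ja, ca = J[active], c[active]
    n, m = len(g), Ja.shape[0]
    # KKT system of the equality-constrained QP on the active set.
    K = np.block([[H, Ja.T], [Ja, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, ca]))
    return sol[:n], sol[n:]            # primal step d, multiplier estimates
```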

An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians

S Na, M Anitescu, M Kolar - Mathematical Programming, 2023 - Springer
We consider solving nonlinear optimization problems with a stochastic objective and
deterministic equality constraints. We assume for the objective that its evaluation, gradient …
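
A sketch of assessing a candidate SQP step with an augmented Lagrangian merit function. For brevity this uses the standard (non-exact) augmented Lagrangian rather than the differentiable exact variant the paper constructs; the penalty parameter and backtracking constants are illustrative.

```python
import numpy as np

def aug_lagrangian(f_val, c_val, lam, mu=10.0):
    # Standard augmented Lagrangian for equality constraints c(x) = 0.
    return f_val + lam @ c_val + 0.5 * mu * float(c_val @ c_val)

def accept_step(f, c, x, d, lam, dlam, alpha=1.0, tries=20):
    # Backtrack on the merit function until it decreases.
    m0 = aug_lagrangian(f(x), c(x), lam)
    for _ in range(tries):
        xn, ln = x + alpha * d, lam + alpha * dlam
        if aug_lagrangian(f(xn), c(xn), ln) < m0:
            return xn, ln
        alpha *= 0.5
    return x, lam                      # no acceptable step found
```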

Scalable subspace methods for derivative-free nonlinear least-squares optimization

C Cartis, L Roberts - Mathematical Programming, 2023 - Springer
We introduce a general framework for large-scale model-based derivative-free optimization
based on iterative minimization within random subspaces. We present a probabilistic worst …
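
A sketch of the random-subspace idea: each outer iteration restricts the n-dimensional problem to a random p-dimensional subspace and solves the small reduced problem with a derivative-free inner method (Nelder-Mead here as a stand-in for the paper's model-based solver; dimensions and objective are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n, p = 50, 3
f = lambda x: float(np.sum((x - 1.0) ** 2))    # stand-in black-box objective
x = np.zeros(n)

for _ in range(30):
    P = rng.normal(size=(n, p)) / np.sqrt(p)   # random subspace basis
    sub = lambda z, P=P: f(x + P @ z)          # p-dimensional reduced problem
    z = minimize(sub, np.zeros(p), method="Nelder-Mead").x
    if sub(z) < f(x):
        x = x + P @ z                          # accept the subspace step
```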

Iteration complexity and finite-time efficiency of adaptive sampling trust-region methods for stochastic derivative-free optimization

Y Ha, S Shashaani - IISE Transactions, 2024 - Taylor & Francis
ASTRO-DF is a prominent trust-region method using adaptive sampling for stochastic
derivative-free optimization of nonconvex problems. Its salient feature is an easy-to …
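
A sketch of the adaptive-sampling idea behind such methods: replicate the noisy oracle at a point until the standard error of the sample mean is small relative to a power of the current trust-region radius, so estimates are only as accurate as the optimization currently needs. The stopping rule and constants are illustrative, not ASTRO-DF's exact conditions.

```python
import numpy as np

def adaptive_mean(noisy_f, x, delta, n0=10, n_max=10_000, kappa=1.0):
    # Replicate until the standard error is small relative to delta**2
    # (an illustrative coupling of estimation error to the radius).
    samples = [noisy_f(x) for _ in range(n0)]
    while len(samples) < n_max:
        se = np.std(samples, ddof=1) / np.sqrt(len(samples))
        if se <= kappa * delta ** 2:
            break
        samples.append(noisy_f(x))
    return float(np.mean(samples))
```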