Recent works have shown that stochastic gradient descent (SGD) achieves the fast convergence rates of full-batch gradient descent for over-parameterized models satisfying …
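Assuming the truncated condition is an interpolation-type property (the over-parameterized model can fit every sample exactly), a minimal numpy sketch of constant-step-size SGD on a consistent least-squares problem; the problem sizes, step size, and iteration count below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                    # 20 samples, 100 parameters: over-parameterized
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)    # consistent system, so a zero-loss solution exists

L = max(float(a @ a) for a in A)  # per-sample smoothness constant of 0.5*(a_i@x - b_i)^2
x = np.zeros(d)
for t in range(5000):
    i = rng.integers(n)
    g = (A[i] @ x - b[i]) * A[i]  # single-sample stochastic gradient
    x -= g / L                    # constant step size; no decay needed when interpolation holds

print("mean squared residual:", float(np.mean((A @ x - b) ** 2)))
```

Because every per-sample loss can be driven to zero simultaneously, the gradient noise vanishes at the solution, which is what lets a constant step size match the full-batch rate.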
We consider variants of trust-region and adaptive cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under certain conditions …
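As a rough illustration of a trust-region iteration with an approximated Hessian (not the paper's algorithm), the sketch below uses a central-difference Hessian estimate and a Cauchy-point step on the Rosenbrock function; the acceptance thresholds and radius updates are generic textbook choices:

```python
import numpy as np

def f(x):      # Rosenbrock: a standard nonconvex test problem (our choice)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def grad(x):
    return np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                     200*(x[1] - x[0]**2)])

def hess_approx(x, h=1e-5):
    # inexact Hessian: central finite differences of the gradient
    n = x.size
    B = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = h
        B[:, i] = (grad(x + e) - grad(x - e)) / (2 * h)
    return 0.5 * (B + B.T)                   # symmetrize

x, Delta = np.array([-1.2, 1.0]), 1.0
for k in range(500):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    B = hess_approx(x)
    # Cauchy point: minimize the quadratic model along -g within the radius
    gBg = g @ B @ g
    t = Delta / np.linalg.norm(g)
    if gBg > 0:
        t = min(t, (g @ g) / gBg)
    s = -t * g
    pred = -(g @ s + 0.5 * s @ B @ s)        # predicted model decrease (> 0)
    rho = (f(x) - f(x + s)) / pred           # actual-to-predicted ratio
    if rho > 0.1:
        x = x + s                            # accept the step
    Delta = 2 * Delta if rho > 0.75 else (0.5 * Delta if rho < 0.25 else Delta)

print(x, f(x))
```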
We present global convergence rates for a line-search method which is based on random first-order models and directions whose quality is ensured only with certain probability. We …
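A crude sketch of the flavor of such a scheme, not the paper's method: the gradient is replaced by a randomly perturbed estimate standing in for a model that is accurate only with some probability, one sufficient-decrease test is made per iteration, and the step size grows on success and shrinks on failure. The noise model, constants, and toy objective are all ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):    return 0.5 * float(x @ x)     # toy smooth objective (our choice)
def grad(x): return x

x = 5.0 * np.ones(10)
alpha, alpha_max, c1 = 1.0, 1.0, 1e-4
for k in range(300):
    # random model gradient: a crude stand-in for a probabilistically
    # accurate first-order model
    g_hat = grad(x) + 0.5 * rng.standard_normal(x.size)
    d = -g_hat
    # one sufficient-decrease test per iteration: grow the step size on
    # success, shrink it and draw a fresh model on failure
    if f(x + alpha * d) <= f(x) - c1 * alpha * float(g_hat @ g_hat):
        x = x + alpha * d
        alpha = min(2 * alpha, alpha_max)
    else:
        alpha *= 0.5

print("f(x) =", f(x))
```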
We propose a novel framework for analyzing convergence rates of stochastic optimization algorithms with adaptive step sizes. This framework is based on analyzing properties of an …
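One concrete adaptive step-size rule that analyses of this kind cover is AdaGrad-norm, where the step size scales inversely with the accumulated squared gradient norms; a toy sketch under our own assumptions (quadratic objective, additive Gaussian gradient noise):

```python
import numpy as np

rng = np.random.default_rng(2)

def stoch_grad(x):
    # unbiased gradient estimate for 0.5*||x||^2 (our toy objective)
    return x + 0.1 * rng.standard_normal(x.size)

x = 3.0 * np.ones(20)
eta, G2 = 1.0, 0.0
for t in range(5000):
    g = stoch_grad(x)
    G2 += float(g @ g)                 # running sum of squared gradient norms
    x -= (eta / np.sqrt(G2)) * g       # step size adapts without knowing L or sigma

print("final objective:", 0.5 * float(x @ x))
```

The appeal of such rules is that the effective step size decays automatically at a data-driven rate, which is exactly the kind of coupled stochastic process these frameworks analyze.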
In this paper, we present convergence guarantees for a modified trust-region method designed for minimizing objective functions whose value, gradient, and Hessian …
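A minimal sketch in the spirit of noise-aware trust-region methods, not the paper's scheme: a steepest-descent model step together with a ratio test relaxed by an assumed noise bound eps_f, so that evaluation noise does not spuriously reject good steps. All oracles and constants below are ours.

```python
import numpy as np

rng = np.random.default_rng(3)

eps_f = 1e-3                                  # assumed bound on function noise

def f_noisy(x): return 0.5 * float(x @ x) + eps_f * rng.uniform(-1, 1)
def g_noisy(x): return x + 1e-2 * rng.standard_normal(x.size)

x, Delta = 4.0 * np.ones(5), 1.0
for k in range(200):
    g = g_noisy(x)
    ng = np.linalg.norm(g)
    s = -min(Delta, ng) * g / ng              # steepest-descent model step
    pred = min(Delta, ng) * ng - 0.5 * min(Delta, ng) ** 2   # model decrease, B = I
    # relaxed ratio test: the eps_f slack keeps noise from rejecting good steps
    rho = (f_noisy(x) - f_noisy(x + s) + 2 * eps_f) / (pred + 2 * eps_f)
    if rho > 0.1:
        x = x + s
        Delta *= 2 if rho > 0.75 else 1
    else:
        Delta *= 0.5

print("||x|| =", float(np.linalg.norm(x)))
```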
We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including …
We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints. For the objective, we assume that its evaluation, gradient …
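For the equality-constrained setting shared by the two snippets above, the basic move in stochastic SQP methods is to solve a Newton-KKT system built from a stochastic gradient estimate. The sketch below is a hypothetical toy instance with an identity Hessian model and a fixed step size standing in for the papers' adaptive choices; it omits inequality constraints entirely.

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical toy instance: min 0.5*||x||^2  s.t.  x0 + x1 - 1 = 0
def grad_noisy(x): return x + 0.05 * rng.standard_normal(x.size)
def c(x):          return np.array([x[0] + x[1] - 1.0])
J = np.array([[1.0, 1.0]])                  # constraint Jacobian (constant here)

x, alpha = np.array([2.0, -3.0]), 0.3
for k in range(300):
    g = grad_noisy(x)
    H = np.eye(2)                           # simple positive-definite Hessian model
    # Newton-KKT system for the SQP direction d and the multiplier estimate
    K = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.concatenate([-g, -c(x)]))[:2]
    x = x + alpha * d                       # fixed step size stands in for the
                                            # papers' adaptive/stochastic rules

print("x =", x, " constraint residual:", c(x))
```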
We introduce a general framework for large-scale model-based derivative-free optimization based on iterative minimization within random subspaces. We present a probabilistic worst …
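A minimal sketch of the subspace idea under our own simplifications: draw a random basis, build a derivative-free linear model of the objective restricted to that subspace via finite differences, and step against the model gradient, shrinking the step scale on failure. This illustrates the general mechanism, not the framework's actual model construction or parameter choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x): return float(np.sum((x - 1.0) ** 2))   # toy smooth objective (our choice)

d, p = 100, 5                      # ambient dimension vs. subspace dimension
x = np.zeros(d)
sigma, h = 0.5, 1e-4               # step scale and finite-difference spacing
fx = f(x)
for k in range(400):
    P = rng.standard_normal((d, p)) / np.sqrt(p)   # random subspace basis
    # derivative-free linear model of f restricted to the subspace
    g_sub = np.array([(f(x + h * P[:, j]) - fx) / h for j in range(p)])
    trial = x - sigma * (P @ g_sub)                # model-descent step in full space
    ft = f(trial)
    if ft < fx:
        x, fx = trial, ft
        sigma = min(2 * sigma, 1.0)                # grow the scale on success
    else:
        sigma *= 0.5                               # shrink on failure

print("f(x) =", fx)
```

Each iteration touches only p + 1 function values and a d-by-p basis, which is the source of the per-iteration savings in large dimensions.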
Y Ha, S Shashaani - IISE Transactions, 2024 - Taylor & Francis
ASTRO-DF is a prominent trust-region method using adaptive sampling for stochastic derivative-free optimization of nonconvex problems. Its salient feature is an easy-to …
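The adaptive-sampling mechanism can be sketched as follows: at each design point, keep drawing oracle replications until the standard error of the sample mean is small relative to a power of the trust-region radius, so tighter radii automatically demand more samples. The stopping rule and constants below are illustrative, not ASTRO-DF's exact conditions.

```python
import numpy as np

rng = np.random.default_rng(6)

def F(x):                           # stochastic zeroth-order oracle (our toy)
    return (x - 2.0) ** 2 + 0.5 * rng.standard_normal()

def adaptive_mean(x, Delta, kappa=1.0, n0=5, n_max=10_000):
    """Sample F(x) until the standard error is small relative to Delta**2,
    mimicking adaptive-sampling stopping rules (constants are illustrative)."""
    vals = [F(x) for _ in range(n0)]
    while len(vals) < n_max:
        se = np.std(vals, ddof=1) / np.sqrt(len(vals))
        if se <= kappa * Delta ** 2:
            break
        vals.append(F(x))
    return float(np.mean(vals)), len(vals)

# smaller radius -> tighter accuracy demand -> larger adaptive sample size
for Delta in (1.0, 0.3, 0.1):
    est, n = adaptive_mean(1.0, Delta)
    print(f"Delta={Delta}: estimate={est:.3f} with n={n} samples")
```

Tying the sample size to the radius in this way balances estimation error against model error, which is what makes the sample-size schedule "easy to" deploy without tuning a fixed batch size.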