Recent works have shown that stochastic gradient descent (SGD) achieves the fast convergence rates of full-batch gradient descent for over-parameterized models satisfying …
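
A minimal sketch of the setting this abstract describes (an assumed toy setup, not taken from the cited work): constant-step SGD on an over-parameterized least-squares problem where the interpolation condition holds, i.e. some parameter vector fits every sample exactly, so per-sample gradients vanish at the solution and a constant step size suffices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 200                                # more parameters than samples
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d)                # consistent system: interpolation holds
    w = np.zeros(d)
    step = 1.0 / np.max(np.sum(A**2, axis=1))     # 1/L_max over per-sample losses
    for _ in range(20000):
        i = rng.integers(n)                       # draw one sample
        w -= step * (A[i] @ w - b[i]) * A[i]      # stochastic gradient step
    print("residual norm:", np.linalg.norm(A @ w - b))   # ~0 under interpolation
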
For deterministic optimization, line search methods augment algorithms by providing stability and improved efficiency. Here we adapt a classical backtracking Armijo line search to the …
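
As an illustration of the classical backtracking Armijo rule in a stochastic setting (a sketch under assumed conventions, not the paper's exact method): the sufficient-decrease test is checked on the same mini-batch loss that produced the gradient, and the closures f and grad as well as the constants eta0, c, beta are assumptions here.

    import numpy as np

    def stochastic_armijo_step(f, grad, x, eta0=1.0, c=0.1, beta=0.5, max_backtracks=30):
        fx, gx = f(x), grad(x)                    # batch loss and batch gradient
        gnorm2 = float(np.dot(gx, gx))
        eta = eta0
        for _ in range(max_backtracks):
            if f(x - eta * gx) <= fx - c * eta * gnorm2:   # Armijo sufficient decrease
                return x - eta * gx, eta
            eta *= beta                                    # shrink the trial step
        return x - eta * gx, eta                           # fall back to smallest trial
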
In this paper, we develop a convergence analysis of a modified line search method for objective functions whose value is computed with noise and whose gradient estimates are …
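
One common way to modify the acceptance test for noisy function values (a sketch of the general idea, not necessarily this paper's rule) is to relax sufficient decrease by twice an assumed noise bound eps_f, so that evaluation error alone cannot force a rejection; f_noisy, eps_f, and c are assumptions here.

    import numpy as np

    def noisy_armijo_accepts(f_noisy, x, g, eta, eps_f, c=1e-4):
        trial = f_noisy(x - eta * g)
        bound = f_noisy(x) - c * eta * float(np.dot(g, g)) + 2.0 * eps_f
        return trial <= bound    # noise-tolerant sufficient-decrease test
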
Do you know the difference between an optimist and a pessimist? The former believes we live in the best possible world, and the latter is afraid that the former might be right.… In that …
In this paper, we present convergence guarantees for a modified trust-region method designed for minimizing objective functions whose value, gradient, and Hessian …
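
For orientation, here is a generic trust-region iteration (Cauchy-point step plus ratio test); the paper's specific modifications for noisy value, gradient, and Hessian estimates are not reproduced, and the oracles f, g, H below may be thought of as inexact.

    import numpy as np

    def trust_region_step(f, g, H, x, delta, eta=0.1):
        gx, Hx = g(x), H(x)
        gn = np.linalg.norm(gx)
        if gn == 0.0:
            return x, delta
        gHg = float(gx @ Hx @ gx)
        # Cauchy point: minimize the quadratic model along -g inside the radius
        tau = 1.0 if gHg <= 0 else min(gn**3 / (delta * gHg), 1.0)
        s = -(tau * delta / gn) * gx
        pred = -(gx @ s + 0.5 * s @ Hx @ s)          # model-predicted decrease
        ared = f(x) - f(x + s)                       # actual decrease
        rho = ared / pred if pred > 0 else -np.inf   # agreement ratio
        if rho >= eta:
            x = x + s                                # accept the step
        delta = 2.0 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
        return x, delta
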
We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including …
We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints. For the objective, we assume that its evaluation, gradient …
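
A sketch of the core linear algebra in a sequential quadratic programming (SQP) step for min f(x) subject to c(x) = 0, when only a stochastic estimate g of the gradient is available: solve the KKT system of the quadratic subproblem with a positive-definite Hessian surrogate B (identity by default). Step-size control and any adaptive rules from these papers are omitted; all names are assumptions for illustration.

    import numpy as np

    def stochastic_sqp_step(g, c_val, J, B=None):
        n, m = g.size, c_val.size
        B = np.eye(n) if B is None else B
        K = np.block([[B, J.T], [J, np.zeros((m, m))]])     # KKT matrix
        sol = np.linalg.solve(K, -np.concatenate([g, c_val]))
        return sol[:n], sol[n:]    # search direction d, multiplier estimate
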
S Sun, J Nocedal - Mathematical Programming, 2023 - Springer
Classical trust region methods were designed to solve problems in which function and gradient information are exact. This paper considers the case when there are errors (or …
Y Ha, S Shashaani - IISE Transactions, 2024 - Taylor & Francis
ASTRO-DF is a prominent trust-region method using adaptive sampling for stochastic derivative-free optimization of nonconvex problems. Its salient feature is an easy-to …
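
In the spirit of adaptive sampling (an illustrative sketch only, not ASTRO-DF's exact rule): replicate the stochastic oracle at a point until the standard error of the sample mean is dominated by kappa * delta^2, so that estimation error shrinks together with the trust-region radius delta. All parameter names below are assumptions.

    import numpy as np

    def adaptive_sample_mean(oracle, x, delta, kappa=1.0, n_min=3, n_max=10000):
        vals = [oracle(x) for _ in range(n_min)]
        while len(vals) < n_max:
            se = np.std(vals, ddof=1) / np.sqrt(len(vals))   # standard error of mean
            if se <= kappa * delta**2:                       # accuracy tied to delta^2
                break
            vals.append(oracle(x))
        return float(np.mean(vals)), len(vals)
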