We consider solving nonlinear optimization problems with a stochastic objective and deterministic equality constraints. We assume for the objective that its evaluation, gradient …
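As a point of reference, a standard way to write the problem class this entry describes (generic notation, not necessarily the paper's own) is:

```latex
\min_{x \in \mathbb{R}^n} \; f(x) = \mathbb{E}_{\xi}\!\left[ F(x;\xi) \right]
\quad \text{subject to} \quad c(x) = 0,
```

where only stochastic estimates of $f$ and its gradient are available, while the constraint function $c$ and its Jacobian can be evaluated exactly.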
S Sun, J Nocedal - Mathematical Programming, 2023 - Springer
Classical trust region methods were designed to solve problems in which function and gradient information are exact. This paper considers the case when there are errors (or …
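For context, the classical acceptance test that such noise-aware methods revisit is the ratio of actual to predicted reduction (generic notation; the specific relaxation studied in the paper is not visible in this snippet):

```latex
\rho_k = \frac{f(x_k) - f(x_k + s_k)}{m_k(0) - m_k(s_k)},
\qquad
x_{k+1} =
\begin{cases}
x_k + s_k, & \text{if } \rho_k \ge \eta,\\
x_k, & \text{otherwise,}
\end{cases}
```

with the trust-region radius enlarged after successful steps and shrunk otherwise. When $f$ is only known up to an error, this ratio becomes unreliable once the predicted reduction is comparable to the noise level, which is the difficulty such methods address.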
V Patel, AS Berahas - SIAM Journal on Mathematics of Data Science, 2024 - SIAM
Gradient descent (GD) is a collection of continuous optimization methods that have achieved immeasurable success in practice. Owing to data science applications, GD with diminishing …
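A minimal sketch of the setting this entry refers to is given below; the step-size rule $\alpha_k = \alpha_0/(k+1)$ and the test function are illustrative choices, not taken from the paper.

```python
# Gradient descent with a diminishing step size alpha_k = alpha0 / (k + 1).
# Illustrative sketch only; not the paper's scheme or assumptions.
import numpy as np

def gd_diminishing(grad, x0, alpha0=0.5, iters=200):
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        x = x - (alpha0 / (k + 1)) * grad(x)   # step size shrinks with k
    return x

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is simply x.
print(gd_diminishing(lambda x: x, [3.0, -2.0]))   # approaches the origin
```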
FE Curtis, K Scheinberg - IEEE Signal Processing Magazine, 2020 - ieeexplore.ieee.org
Optimization lies at the heart of machine learning (ML) and signal processing (SP). Contemporary approaches based on the stochastic gradient (SG) method are nonadaptive …
In many important machine learning applications, the standard assumption of having a globally Lipschitz continuous gradient may fail to hold. This paper delves into a more …
We present a stochastic extension of the mesh adaptive direct search (MADS) algorithm originally developed for deterministic blackbox optimization. The algorithm, called StoMADS …
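A heavily simplified sketch of the direct-search polling idea that MADS builds on is shown below; it uses fixed coordinate directions and omits the mesh/frame construction that defines MADS as well as the probabilistic function estimates that define StoMADS.

```python
# Simplified direct-search poll step (coordinate directions, shrinking step).
# Illustrative only; not the MADS or StoMADS algorithm.
def poll_step(f, x, delta):
    best_x, best_f = x, f(x)
    for i in range(len(x)):
        for sign in (+1.0, -1.0):
            y = list(x)
            y[i] += sign * delta
            fy = f(y)
            if fy < best_f:                # accept an improving poll point
                best_x, best_f = y, fy
    if best_x is x:                        # no improvement: refine the step size
        delta *= 0.5
    return best_x, delta

f = lambda z: (z[0] - 1.0) ** 2 + z[1] ** 2
x, delta = [2.0, 2.0], 1.0
for _ in range(30):
    x, delta = poll_step(f, x, delta)
print(x)   # approaches the minimizer (1, 0)
```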
We propose a trust-region stochastic sequential quadratic programming algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic objectives and …
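To make the setting concrete, the sketch below takes one plain SQP (Newton-KKT) step with a sampled gradient on a toy equality-constrained problem. It only illustrates the problem structure, not the TR-StoSQP algorithm itself, which additionally restricts the step to a trust region and adaptively chooses the radius.

```python
# Illustrative stochastic SQP step for  min_x E[f(x; xi)]  s.t.  c(x) = 0,
# using a sampled gradient g in place of the exact gradient. Not TR-StoSQP.
import numpy as np

def sqp_step(g, H, c, J):
    """Solve the KKT system  [H  J^T; J  0] [d; lam] = -[g; c]."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]              # step d and multiplier estimate lam

# Toy problem: min 0.5*||x||^2  s.t.  x0 + x1 - 1 = 0, with a noisy gradient.
rng = np.random.default_rng(0)
x = np.array([2.0, -1.0])
for k in range(20):
    g = x + 0.01 * rng.standard_normal(2)   # sampled (noisy) objective gradient
    H = np.eye(2)                           # objective Hessian
    c = np.array([x[0] + x[1] - 1.0])       # constraint value
    J = np.array([[1.0, 1.0]])              # constraint Jacobian
    d, lam = sqp_step(g, H, c, J)
    x = x + d                               # full step; TR-StoSQP instead
                                            # bounds ||d|| by a trust region
print(x)   # approaches the solution (0.5, 0.5)
```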
Classical trust region methods were designed to solve problems in which function and gradient information are exact. This paper considers the case when there are bounded …
L Liu, X Liu, CJ Hsieh, D Tao - IEEE Transactions on Neural …, 2023 - ieeexplore.ieee.org
Trust region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties for nonconvex optimization by concurrently computing …
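For reference, the two model subproblems these acronyms refer to are, in generic notation:

```latex
\text{TR:} \quad \min_{\|s\| \le \Delta_k} \; g_k^{\top} s + \tfrac{1}{2} s^{\top} H_k s,
\qquad
\text{ARC:} \quad \min_{s} \; g_k^{\top} s + \tfrac{1}{2} s^{\top} H_k s + \tfrac{\sigma_k}{3}\|s\|^{3},
```

where $g_k$ and $H_k$ are (possibly stochastic) gradient and Hessian approximations at the current iterate, $\Delta_k$ is the trust-region radius, and $\sigma_k$ is the cubic regularization weight.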