Stochastic first- and zeroth-order methods for nonconvex stochastic programming

S Ghadimi, G Lan - SIAM journal on optimization, 2013 - SIAM
In this paper, we introduce a new stochastic approximation type algorithm, namely, the
randomized stochastic gradient (RSG) method, for solving an important class of nonlinear …

Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization

S Ghadimi, G Lan, H Zhang - Mathematical Programming, 2016 - Springer
This paper considers a class of constrained stochastic composite optimization problems
whose objective function is given by the summation of a differentiable (possibly nonconvex) …

Gradient-free methods for deterministic and stochastic nonsmooth nonconvex optimization

T Lin, Z Zheng, M Jordan - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and
business decision making, whereas two core challenges impede the development of …

Dual averaging method for regularized stochastic learning and online optimization

L Xiao - Advances in Neural Information Processing …, 2009 - proceedings.neurips.cc
We consider regularized stochastic learning and online optimization problems, where the
objective function is the sum of two convex terms: one is the loss function of the learning …

Convex optimization algorithms in medical image reconstruction—in the age of AI

J Xu, F Noo - Physics in Medicine & Biology, 2022 - iopscience.iop.org
The past decade has seen the rapid growth of model-based image reconstruction (MBIR)
algorithms, which are often applications or adaptations of convex optimization algorithms …

Fast and robust recursive algorithms for separable nonnegative matrix factorization

N Gillis, SA Vavasis - IEEE transactions on pattern analysis …, 2013 - ieeexplore.ieee.org
In this paper, we study the nonnegative matrix factorization problem under the separability
assumption (that is, there exists a cone spanned by a small subset of the columns of the …

Loss minimization and parameter estimation with heavy tails

D Hsu, S Sabato - Journal of Machine Learning Research, 2016 - jmlr.org
This work studies applications and generalizations of a simple estimation technique that
provides exponential concentration under heavy-tailed distributions, assuming only …

Conditional gradient sliding for convex optimization

G Lan, Y Zhou - SIAM Journal on Optimization, 2016 - SIAM
In this paper, we present a new conditional gradient type method for convex optimization by
calling a linear optimization (LO) oracle to minimize a series of linear functions over the …

Differentially private stochastic optimization: New results in convex and non-convex settings

R Bassily, C Guzmán, M Menart - Advances in Neural …, 2021 - proceedings.neurips.cc
We study differentially private stochastic optimization in convex and non-convex settings. For
the convex case, we focus on the family of non-smooth generalized linear losses (GLLs) …

The power of first-order smooth optimization for black-box non-smooth problems

A Gasnikov, A Novitskii, V Novitskii… - arXiv preprint arXiv …, 2022 - arxiv.org
Gradient-free/zeroth-order methods for black-box convex optimization have been
extensively studied in the last decade with the main focus on oracle calls complexity. In this …