Searching for optimal per-coordinate step-sizes with multidimensional backtracking

F Kunstner, V Sanches Portella… - Advances in Neural …, 2023 - proceedings.neurips.cc
The backtracking line-search is an effective technique to automatically tune the step-size in
smooth optimization. It guarantees similar performance to using the theoretically optimal …
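As background for the snippet above, a minimal Armijo backtracking line search can be sketched as follows (a generic scalar-step illustration, not the paper's multidimensional per-coordinate variant; all names are ours):

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, eta0=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking: shrink the step-size until sufficient decrease holds."""
    g = grad_f(x)
    eta = eta0
    # Accept eta once f(x - eta*g) <= f(x) - c*eta*||g||^2 (Armijo condition).
    while f(x - eta * g) > f(x) - c * eta * (g @ g):
        eta *= beta
    return eta

# Minimal usage on a smooth, badly-conditioned quadratic f(x) = 0.5 * x^T A x.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
x = np.array([1.0, 1.0])
eta = backtracking_line_search(f, grad_f, x)
```

The accepted step-size adapts to the local curvature without knowing the Lipschitz constant in advance.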

Fast, blind, and accurate: Tuning-free sparse regression with global linear convergence

CM Verdun, O Melnyk, F Krahmer… - The Thirty Seventh …, 2024 - proceedings.mlr.press
Many algorithms for high-dimensional regression problems require the calibration of
regularization hyperparameters. This, in turn, often requires the knowledge of the unknown …

Variance reduced training with stratified sampling for forecasting models

Y Lu, Y Park, L Chen, Y Wang… - International …, 2021 - proceedings.mlr.press
In large-scale time series forecasting, one often encounters the situation where the temporal
patterns of time series, while drifting over time, differ from one another in the same dataset …
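A minimal sketch of stratified minibatch sampling in this spirit (an illustrative reading of the title; the grouping of series into strata is a hypothetical input, e.g. from clustering temporal patterns):

```python
import numpy as np

def stratified_minibatch(strata, batch_size, rng):
    """Draw a minibatch with a fixed number of examples per stratum,
    so every group of series is represented regardless of its size."""
    per_stratum = max(1, batch_size // len(strata))
    batch = [rng.choice(s, size=per_stratum, replace=True) for s in strata]
    return np.concatenate(batch)

rng = np.random.default_rng(0)
strata = [np.arange(0, 50), np.arange(50, 60)]  # two imbalanced groups of series
batch = stratified_minibatch(strata, batch_size=8, rng=rng)
```

Compared with uniform sampling, each stratum contributes equally to the minibatch, which reduces the variance of the stochastic gradient when strata are internally homogeneous.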

On plug-and-play regularization using linear denoisers

RG Gavaskar, CD Athalye… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
In plug-and-play (PnP) regularization, the knowledge of the forward model is combined with
a powerful denoiser to obtain state-of-the-art image reconstructions. This is typically done by …
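The combination described above usually amounts to swapping a denoiser for the proximal map inside an iterative scheme. A minimal PnP-ISTA sketch with a linear denoiser (our toy construction, not the paper's operator):

```python
import numpy as np

def pnp_ista(A, y, denoise, step, n_iter=100):
    """Plug-and-play ISTA sketch: a gradient step on the data term
    0.5*||Ax - y||^2, followed by a denoiser in place of a proximal map."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)   # forward-model gradient step
        x = denoise(x)                     # plug in any denoiser here
    return x

# A linear "denoiser" W (local averaging), matching the linear-denoiser setting.
n = 5
W = 0.8 * np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
A = np.eye(n)
y = np.array([1.0, 0.0, 0.0, 0.0, 1.0])
x_hat = pnp_ista(A, y, denoise=lambda v: W @ v, step=0.5)
```

With a linear denoiser the iteration is an affine fixed-point map, which is what makes the convergence analysis tractable in this setting.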

Proximal quasi-Newton method for composite optimization over the Stiefel manifold

Q Wang, WH Yang - Journal of Scientific Computing, 2023 - Springer
In this paper, we consider composite optimization problems over the Stiefel manifold. A
successful method to solve this class of problems is the proximal gradient method proposed …
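For context, a crude proximal-gradient sketch on the Stiefel manifold: a Euclidean gradient step, an l1 prox, then a QR retraction back onto the manifold (methods such as ManPG instead solve the prox subproblem on the tangent space; this simplified version and its names are ours):

```python
import numpy as np

def qr_retract(Y):
    """Retract an n x p matrix onto the Stiefel manifold via QR with a sign fix."""
    Q, R = np.linalg.qr(Y)
    signs = np.where(np.diag(R) < 0, -1.0, 1.0)
    return Q * signs

def prox_grad_stiefel_step(X, grad, lam, step):
    """Crude sketch: Euclidean gradient step, l1 prox, then QR retraction."""
    V = X - step * grad
    V = np.sign(V) * np.maximum(np.abs(V) - step * lam, 0.0)  # l1 prox
    return qr_retract(V)

# Random starting point on St(4, 2), one step with a dummy gradient.
X = qr_retract(np.random.default_rng(0).standard_normal((4, 2)))
X_new = prox_grad_stiefel_step(X, grad=np.ones((4, 2)), lam=0.1, step=0.1)
```

The retraction keeps the iterate feasible (X^T X = I) after the non-smooth prox breaks orthonormality.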

Towards constituting mathematical structures for learning to optimize

J Liu, X Chen, Z Wang, W Yin… - … Conference on Machine …, 2023 - proceedings.mlr.press
Learning to Optimize (L2O), a technique that utilizes machine learning to learn an
optimization algorithm automatically from data, has gained increasing attention in recent years …

A harmonic framework for stepsize selection in gradient methods

G Ferrandi, ME Hochstenbach, N Krejić - Computational Optimization and …, 2023 - Springer
We study the use of inverse harmonic Rayleigh quotients with a target for the stepsize
selection in gradient methods for nonlinear unconstrained optimization problems. This not …
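As a nearby reference point (not the paper's exact rule), the classical Barzilai-Borwein stepsizes can be read as inverse (harmonic) Rayleigh quotients of the secant pair s = x_k - x_{k-1}, y = g_k - g_{k-1}:

```python
import numpy as np

def bb_stepsizes(s, y):
    """Barzilai-Borwein stepsizes as inverse (harmonic) Rayleigh quotients
    of the average Hessian against the secant pair (s, y)."""
    bb1 = (s @ s) / (s @ y)   # inverse of the Rayleigh quotient s^T y / s^T s
    bb2 = (s @ y) / (y @ y)   # inverse of the harmonic quotient y^T y / s^T y
    return bb1, bb2

# On a quadratic with Hessian A, y = A s, so both stepsizes lie between the
# reciprocals of the extreme eigenvalues of A.
A = np.diag([2.0, 8.0])
s = np.array([1.0, 1.0])
y = A @ s
bb1, bb2 = bb_stepsizes(s, y)
```

Here bb2 <= bb1, and both are bracketed by 1/8 and 1/2, the inverse extreme eigenvalues.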

Proximal diagonal Newton methods for composite optimization problems

S Yagishita, S Nakayama - arXiv preprint arXiv:2310.06789, 2023 - arxiv.org
This paper proposes new proximal Newton-type methods with a diagonal metric for solving
composite optimization problems whose objective function is the sum of a twice continuously …
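With a diagonal metric, the scaled proximal map separates across coordinates; for an l1 term it reduces to per-coordinate soft-thresholding. A sketch under that assumption (our names and toy data, not the paper's algorithm):

```python
import numpy as np

def prox_l1_diag(v, d, lam):
    """Prox of lam*||x||_1 in the metric 0.5 * sum_i d_i (x_i - v_i)^2.
    Separable => per-coordinate soft-thresholding with threshold lam / d_i."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / d, 0.0)

def prox_diag_newton_step(x, grad, d, lam):
    """One proximal diagonal-Newton step: a scaled gradient step v = x - grad/d,
    followed by the prox in the diagonal metric (a sketch of the general scheme)."""
    v = x - grad / d
    return prox_l1_diag(v, d, lam)

x = np.array([1.0, -1.0])
grad = np.array([0.2, -0.2])
d = np.array([2.0, 4.0])       # hypothetical diagonal Hessian approximation
x_new = prox_diag_newton_step(x, grad, d, lam=0.1)
```

The diagonal metric keeps the per-iteration cost linear in the dimension while still adapting the threshold to per-coordinate curvature.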

Deep-plug-and-play proximal Gauss-Newton method with applications to nonlinear, ill-posed inverse problems

F Colibazzi, D Lazzaro, S Morigi… - Inverse Problems and …, 2023 - aimsciences.org
In this paper we propose a proximal Gauss-Newton method for the penalized nonlinear least
squares optimization problem arising from regularization of ill-posed nonlinear inverse …
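A simplified proximal Gauss-Newton sketch for an l1-penalized nonlinear least-squares toy problem (the exact scheme applies the prox in the Gauss-Newton metric; this version, with names and data of our choosing, uses a plain prox after a damped Gauss-Newton step):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gn_step(residual, jacobian, x, lam, mu=1e-3):
    """One step for min 0.5*||r(x)||^2 + lam*||x||_1: linearize r at x,
    solve the damped Gauss-Newton system, then apply the l1 prox (sketch)."""
    J = jacobian(x)
    r = residual(x)
    H = J.T @ J + mu * np.eye(x.size)       # damped Gauss-Newton Hessian
    step = np.linalg.solve(H, J.T @ r)
    return soft_threshold(x - step, lam)

# Toy nonlinear residual r(x) = [x0^2 - 1, x1 - 0.5].
residual = lambda x: np.array([x[0]**2 - 1.0, x[1] - 0.5])
jacobian = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x = np.array([2.0, 0.0])
for _ in range(20):
    x = prox_gn_step(residual, jacobian, x, lam=1e-6)
```

Linearizing the residual gives Newton-like local convergence on the smooth part without second derivatives of r.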

Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization

G Fort, E Moulines - Statistics and Computing, 2023 - Springer
This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER
algorithm (3P-SPIDER), designed to solve finite sum non-convex composite optimization. It …
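The SPIDER gradient estimator at the core of such methods recursively corrects a periodic full gradient with sampled gradient differences. A toy finite-sum sketch (our construction, not the 3P-SPIDER algorithm itself):

```python
import numpy as np

# Finite sum: f_i(x) = 0.5 * a_i * x^2, so grad_i(x) = a_i * x.
coeffs = np.array([1.0, 2.0, 3.0])
grads = [lambda x, a=a: a * x for a in coeffs]

def spider_estimator(v_prev, x_prev, x, idx):
    """SPIDER recursion: v_k = v_{k-1} + avg_{i in idx}[g_i(x_k) - g_i(x_{k-1})].
    Only the sampled differences are evaluated, not the full gradient."""
    corr = np.mean([grads[i](x) - grads[i](x_prev) for i in idx])
    return v_prev + corr

x0, x1 = 1.0, 0.5
v0 = np.mean([g(x0) for g in grads])            # periodic full-gradient refresh
v1 = spider_estimator(v0, x0, x1, idx=[0, 2])   # cheap recursive update
```

Because the gradients here are linear in x, the sampled correction happens to recover the exact full gradient at x1; in general the recursion only controls the variance of the estimate.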