Many algorithms for high-dimensional regression problems require the calibration of regularization hyperparameters. This, in turn, often requires the knowledge of the unknown …
Y Lu, Y Park, L Chen, Y Wang… - International …, 2021 - proceedings.mlr.press
In large-scale time series forecasting, one often encounters the situation where the temporal patterns of time series, while drifting over time, differ from one another in the same dataset …
In plug-and-play (PnP) regularization, the knowledge of the forward model is combined with a powerful denoiser to obtain state-of-the-art image reconstructions. This is typically done by …
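The PnP scheme the snippet describes can be sketched minimally as a gradient step on the data-fidelity term followed by a denoiser in place of a proximal operator. Everything below is an illustrative assumption: a linear forward model `A`, a quadratic fidelity, and a toy soft-thresholding denoiser standing in for a learned one.

```python
import numpy as np

def pnp_ista(A, y, denoiser, step, iters=100, x0=None):
    """Plug-and-play ISTA sketch: gradient step on the data-fidelity
    0.5 * ||A x - y||^2, then an off-the-shelf denoiser that replaces
    the proximal operator of an explicit regularizer."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of the data term
        x = denoiser(x - step * grad)  # denoiser in place of a prox
    return x

# toy denoiser: soft-thresholding (stand-in for a learned denoiser)
soft = lambda v, t=0.1: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

With `A` the identity and `step=1`, the iteration reduces to one soft-threshold of `y`, which makes the fixed point easy to check by hand.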
Q Wang, WH Yang - Journal of Scientific Computing, 2023 - Springer
In this paper, we consider the composite optimization problems over the Stiefel manifold. A successful method to solve this class of problems is the proximal gradient method proposed …
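For context, a bare Riemannian gradient step on the Stiefel manifold handles the smooth part of such composite problems; the proximal-gradient methods the snippet refers to add a proximal subproblem for the nonsmooth term on top. A minimal sketch, using tangent-space projection and a QR retraction (both standard choices, assumed here for illustration):

```python
import numpy as np

def stiefel_grad_step(X, egrad, step):
    """One Riemannian gradient step on St(n, p) = {X : X^T X = I}:
    project the Euclidean gradient onto the tangent space at X, move,
    then retract back to the manifold via a QR factorization."""
    sym = (X.T @ egrad + egrad.T @ X) / 2
    G = egrad - X @ sym                    # tangent-space projection
    Q, R = np.linalg.qr(X - step * G)
    return Q * np.sign(np.diag(R))         # sign fix for a continuous retraction
```

The QR retraction keeps the iterate exactly orthonormal, which is the invariant any method on the Stiefel manifold must preserve.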
Learning to Optimize (L2O), a technique that uses machine learning to learn an optimization algorithm automatically from data, has attracted increasing attention in recent years …
We study the use of inverse harmonic Rayleigh quotients with target for the stepsize selection in gradient methods for nonlinear unconstrained optimization problems. This not …
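The classical Barzilai–Borwein stepsizes, which the harmonic Rayleigh-quotient viewpoint in the snippet generalizes, can be sketched as follows; this is the untargeted special case, not the paper's targeted variant:

```python
import numpy as np

def bb_stepsizes(x_prev, x_cur, g_prev, g_cur):
    """Barzilai-Borwein stepsizes: secant-based inverse-curvature
    estimates along the last step. 1/step can be read as a (harmonic)
    Rayleigh quotient of the Hessian, which is the view the targeted
    variants build on."""
    s = x_cur - x_prev      # step difference
    y = g_cur - g_prev      # gradient difference
    bb1 = (s @ s) / (s @ y)  # long BB step
    bb2 = (s @ y) / (y @ y)  # short BB step
    return bb1, bb2
```

On a quadratic with diagonal Hessian the two quotients bracket the inverse eigenvalues, which is what makes them effective stepsizes.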
This paper proposes new proximal Newton-type methods with a diagonal metric for solving composite optimization problems whose objective function is the sum of a twice continuously …
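With a diagonal metric the scaled proximal map decouples componentwise, which is what makes such proximal Newton-type steps cheap. A sketch for one common instance, an l1 penalty (the penalty and the metric here are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def diag_prox_newton_step(x, grad, diag_hess, lam):
    """One proximal Newton step with diagonal metric D ~ Hessian for
    min f(x) + lam * ||x||_1. With D diagonal, the D-scaled proximal
    map is a componentwise soft-threshold with threshold lam / d_i."""
    d = np.maximum(diag_hess, 1e-8)   # keep the metric positive
    z = x - grad / d                  # diagonally scaled Newton-like step
    return np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)
```

Note how a larger curvature estimate `d_i` both shortens the step and shrinks the threshold in coordinate `i`.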
F Colibazzi, D Lazzaro, S Morigi… - Inverse Problems and …, 2023 - aimsciences.org
In this paper we propose a proximal Gauss-Newton method for the penalized nonlinear least squares optimization problem arising from regularization of ill-posed nonlinear inverse …
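A bare-bones proximal Gauss-Newton iteration of the kind the snippet describes might look like the sketch below; the `prox` argument is a hypothetical handle for the regularizer's proximal map, and the damping term `mu` is added here only for numerical safety:

```python
import numpy as np

def prox_gauss_newton_step(x, residual, jac, prox, mu=1e-6):
    """One proximal Gauss-Newton step for min 0.5*||r(x)||^2 + g(x):
    solve the damped Gauss-Newton normal equations for the smooth
    least-squares part, then apply the proximal map of the penalty g."""
    r, J = residual(x), jac(x)
    H = J.T @ J + mu * np.eye(x.size)     # damped Gauss-Newton metric
    dx = np.linalg.solve(H, -J.T @ r)
    return prox(x + dx)
```

For a linear residual and an identity `prox`, one step lands (up to the damping) on the least-squares solution, which is a quick sanity check.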
This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER algorithm (3P-SPIDER), designed to solve finite sum non-convex composite optimization. It …
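The SPIDER-type variance-reduced gradient estimator at the core of such algorithms can be written as the generic sketch below; the 3P-SPIDER method itself adds the perturbation, preconditioning, and proximal machinery on top, none of which is shown here:

```python
import numpy as np

def spider_gradients(grad_i, n, x_seq, q, seed=0):
    """SPIDER recursive gradient estimator for f = (1/n) sum_i f_i:
    refresh with the full gradient every q iterates, otherwise update
    the previous estimate with a single-sample difference correction."""
    rng = np.random.default_rng(seed)
    v, out = None, []
    for k, x in enumerate(x_seq):
        if k % q == 0:
            v = sum(grad_i(i, x) for i in range(n)) / n   # full pass
        else:
            i = int(rng.integers(n))                      # cheap correction
            v = v + grad_i(i, x) - grad_i(i, x_seq[k - 1])
        out.append(v)
    return out
```

When all component gradients coincide (as in the test below), the recursion tracks the exact gradient, which isolates the bookkeeping from the stochastic error.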