End-to-end constrained optimization learning: A survey

J Kotary, F Fioretto, P Van Hentenryck… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper surveys recent attempts at leveraging machine learning to solve constrained
optimization problems. It focuses on work integrating combinatorial solvers …

Only train once: A one-shot neural network training and pruning framework

T Chen, B Ji, T Ding, B Fang, G Wang… - Advances in …, 2021 - proceedings.neurips.cc
Structured pruning is a commonly used technique in deploying deep neural networks
(DNNs) onto resource-constrained devices. However, the existing pruning methods are …
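
The pruning criterion behind such one-shot structured methods can be illustrated with a plain magnitude rule. The sketch below is not the OTO framework itself (which trains with zero-invariant parameter groups and a specialized optimizer); it only shows the basic operation of zeroing whole output channels, with `keep_ratio` and the helper name invented for this example.

```python
import numpy as np

def prune_rows_by_norm(W, keep_ratio=0.5):
    """Zero out the rows (output channels) of W with the smallest L2 norms.

    Illustrative magnitude-based structured pruning only; OTO instead
    groups zero-invariant parameters and drives whole groups to zero
    during a single training run.
    """
    norms = np.linalg.norm(W, axis=1)          # one norm per output channel
    k = int(np.ceil(keep_ratio * W.shape[0]))  # number of channels to keep
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[np.argsort(norms)[-k:]] = True        # keep the k largest-norm rows
    return W * mask[:, None], mask             # pruned weights + channel mask

rng = np.random.default_rng(0)
W_pruned, mask = prune_rows_by_norm(rng.normal(size=(8, 16)))
print("kept channels:", np.flatnonzero(mask))
```

Because whole rows are removed, the pruned layer can be materialized as a genuinely smaller dense matrix, which is what makes structured pruning friendly to resource-constrained hardware.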

Fixed point strategies in data science

PL Combettes, JC Pesquet - IEEE Transactions on Signal …, 2021 - ieeexplore.ieee.org
The goal of this article is to promote the use of fixed point strategies in data science by
showing that they provide a simplifying and unifying framework to model, analyze, and solve …
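
The workhorse behind many such fixed point methods is the Krasnosel'skii-Mann iteration. A minimal sketch, assuming a nonexpansive operator `T` (here one ISTA step, whose fixed points are lasso solutions); the averaging parameter `lam` and the toy problem are arbitrary choices for illustration.

```python
import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, iters=300):
    """Averaged iteration x_{k+1} = (1 - lam)*x_k + lam*T(x_k).

    Converges to a fixed point of T whenever T is nonexpansive and a
    fixed point exists.
    """
    x = x0
    for _ in range(iters):
        x = (1 - lam) * x + lam * T(x)
    return x

# Toy instance: T is one proximal-gradient (ISTA) step for the lasso,
# so fixed points of T are lasso minimizers.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 10)), rng.normal(size=20)
step = 1.0 / np.linalg.norm(A, 2) ** 2
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
T = lambda x: soft(x - step * A.T @ (A @ x - b), 0.1 * step)
x_star = krasnoselskii_mann(T, np.zeros(10))
print("fixed-point residual:", np.linalg.norm(x_star - T(x_star)))
```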

Safe screening rules for l0-regression from perspective relaxations

A Atamtürk, A Gómez - International Conference on Machine …, 2020 - proceedings.mlr.press
We give safe screening rules to eliminate variables from regression with $\ell_0$
regularization or a cardinality constraint. These rules are based on guarantees that a feature …
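
To make the screening idea concrete, here is the classic gap-safe test for the lasso (Ndiaye et al.), the convex prototype of such rules; the paper's contribution is deriving analogous certificates for the much harder $\ell_0$ case via perspective relaxations, which this sketch does not reproduce.

```python
import numpy as np

def gap_safe_screen(A, b, x, lam):
    """Boolean mask of features that can provably be eliminated from the
    lasso min 0.5*||Ax - b||^2 + lam*||x||_1, given any iterate x."""
    r = b - A @ x
    theta = r / max(lam, np.max(np.abs(A.T @ r)))      # dual-feasible point
    primal = 0.5 * (r @ r) + lam * np.abs(x).sum()
    dual = 0.5 * (b @ b) - 0.5 * lam**2 * np.sum((theta - b / lam) ** 2)
    radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
    scores = np.abs(A.T @ theta) + radius * np.linalg.norm(A, axis=0)
    return scores < 1.0    # True => feature is inactive at every optimum
```

Any feature flagged True can be deleted without changing the solution, which is what makes such a rule "safe" rather than heuristic.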

Block coordinate regularization by denoising

Y Sun, J Liu, U Kamilov - Advances in Neural Information …, 2019 - proceedings.neurips.cc
We consider the problem of estimating a vector from its noisy measurements using a prior
specified only through a denoising function. Recent work on plug-and-play priors (PnP) and …
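
A regularization-by-denoising (RED) step makes the "prior specified only through a denoising function" concrete. The sketch below is a full (non-block) RED gradient step with a crude averaging filter standing in for a real denoiser; the paper's method instead updates one random coordinate block at a time, and `gamma`, `tau`, and the toy denoiser are illustrative choices.

```python
import numpy as np

def red_step(x, A, y, denoise, gamma=0.1, tau=0.5):
    """One RED update x <- x - gamma * (A^T(Ax - y) + tau*(x - D(x))).

    The block-coordinate variant applies this to a random subset of
    coordinates per iteration, reducing per-iteration cost.
    """
    grad_data = A.T @ (A @ x - y)         # gradient of the data-fit term
    grad_prior = tau * (x - denoise(x))   # prior expressed via denoiser D
    return x - gamma * (grad_data + grad_prior)

def denoise(x, k=3):
    """Toy denoiser: local averaging (a real PnP/RED setup would plug in
    BM3D or a trained CNN here)."""
    pad = np.pad(x, k // 2, mode="edge")
    return np.convolve(pad, np.ones(k) / k, mode="valid")

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 50))
x_true = np.zeros(50); x_true[::10] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=30)
x = np.zeros(50)
for _ in range(200):
    x = red_step(x, A, y, denoise)
```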

Celer: a fast solver for the lasso with dual extrapolation

M Massias, A Gramfort… - … Conference on Machine …, 2018 - proceedings.mlr.press
Convex sparsity-inducing regularizations are ubiquitous in high-dimensional machine
learning, but solving the resulting optimization problems can be slow. To accelerate solvers …
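
The dual extrapolation trick at the heart of Celer can be sketched compactly: coordinate-descent residuals converge along a regular pattern, so a small least-squares combination of the last few residuals yields a far better dual certificate than the last residual alone. A simplified sketch (Celer also uses working sets, pruning, and numerical safeguards omitted here):

```python
import numpy as np

def dual_extrapolation(residuals, A, lam):
    """Extrapolated dual point from a list of >= 2 recent residuals
    r = b - A x (oldest first), for the lasso with parameter lam."""
    R = np.stack(residuals, axis=1)        # shape (n_samples, K)
    U = np.diff(R, axis=1)                 # successive differences
    # Affine combination minimizing ||R[:, :-1] @ c|| with sum(c) = 1.
    z = np.linalg.solve(U.T @ U + 1e-10 * np.eye(U.shape[1]),
                        np.ones(U.shape[1]))
    c = z / z.sum()
    r_acc = R[:, :-1] @ c                  # extrapolated residual
    return r_acc / max(lam, np.max(np.abs(A.T @ r_acc)))  # dual feasible
```

A tighter dual point shrinks the duality-gap estimate, which in turn makes gap-based screening and working-set selection much more aggressive.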

Learning step sizes for unfolded sparse coding

P Ablin, T Moreau, M Massias… - Advances in Neural …, 2019 - proceedings.neurips.cc
Sparse coding is typically solved by iterative optimization techniques, such as the Iterative
Shrinkage-Thresholding Algorithm (ISTA). Unfolding and learning weights of ISTA using …
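
A minimal sketch of the parameterization studied here: unfolded ISTA in which the dictionary is fixed and only one step size per layer is learned. PyTorch is assumed; the class name, depth, and regularization weight are illustrative, and the training loop is omitted.

```python
import torch

class StepLISTA(torch.nn.Module):
    """Unfolded ISTA with learned per-layer step sizes only.

    Layer t applies one ISTA step for 0.5*||D x - y||^2 + lam*||x||_1
    with its own step size gamma_t (thresholds scale with gamma_t)."""
    def __init__(self, D, n_layers=10, lam=0.1):
        super().__init__()
        self.D, self.lam = D, lam                           # D fixed, shared
        L = torch.linalg.matrix_norm(D, ord=2).item() ** 2  # Lipschitz const.
        self.gammas = torch.nn.Parameter(torch.full((n_layers,), 1.0 / L))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.D.shape[1])
        for gamma in self.gammas:
            grad = (x @ self.D.T - y) @ self.D              # data-fit gradient
            v = x - gamma * grad
            x = torch.sign(v) * torch.relu(v.abs() - gamma * self.lam)
        return x
```

Initializing every step size at 1/L recovers plain ISTA, so training can only improve on the classical iteration.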

Doubly Sparse Asynchronous Learning

R Bao, X Wu, W Xian, H Huang - The 31st International Joint Conference …, 2022 - par.nsf.gov
Parallel optimization has become popular for large-scale learning in the past decades.
However, existing methods suffer from huge computational cost, memory usage, and …
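
For context, the basic lock-free asynchronous pattern such methods build on resembles the Hogwild!-style sketch below: workers update a shared parameter vector without locks, relying on sparse per-sample updates to keep write conflicts rare. This is illustrative only (Python threads serialize under the GIL, and the paper's "doubly sparse" method additionally keeps the iterate itself sparse); all sizes and rates are arbitrary.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
X = (rng.random((1000, 200)) < 0.05) * rng.normal(size=(1000, 200))  # sparse data
w_true = np.zeros(200); w_true[:10] = 1.0
y = X @ w_true + 0.01 * rng.normal(size=1000)
w = np.zeros(200)                    # shared parameters, updated lock-free

def worker(seed, steps=2000, lr=0.05):
    r = np.random.default_rng(seed)
    for _ in range(steps):
        i = r.integers(len(X))
        nz = np.flatnonzero(X[i])                  # coords this sample touches
        g = (X[i, nz] @ w[nz] - y[i]) * X[i, nz]   # sparse SGD gradient
        w[nz] -= lr * g                            # unsynchronized write

with ThreadPoolExecutor(max_workers=4) as ex:
    for s in range(4):
        ex.submit(worker, s)
print("parameter error:", np.linalg.norm(w - w_true))
```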

An accelerated doubly stochastic gradient method with faster explicit model identification

R Bao, B Gu, H Huang - Proceedings of the 31st ACM International …, 2022 - dl.acm.org
Sparsity-regularized loss minimization problems play an important role in various fields
including machine learning, data mining, and modern statistics. Proximal gradient descent …
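
"Explicit model identification" here means the solver can certify when the iterate's support has settled, after which computation can be restricted to the active features. A minimal sketch with plain proximal gradient (ISTA) on the lasso, tracking the last iteration at which the support changed; the paper's accelerated doubly stochastic algorithm is not reproduced here.

```python
import numpy as np

def ista_with_identification(A, b, lam, iters=500):
    """ISTA for the lasso, returning the solution and the last iteration
    at which the support (set of nonzero coordinates) changed."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    support, last_change = frozenset(), None
    for t in range(iters):
        v = x - step * A.T @ (A @ x - b)                        # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0)  # soft threshold
        s = frozenset(np.flatnonzero(x))
        if s != support:
            support, last_change = s, t
    return x, last_change
```

Once the support is identified, switching to a solver restricted to those coordinates is what yields the practical speedups that identification results promise.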

Hybrid ISTA: Unfolding ISTA with convergence guarantees using free-form deep neural networks

Z Zheng, W Dai, D Xue, C Li, J Zou… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
It is promising to solve linear inverse problems by unfolding iterative algorithms (e.g., the iterative
shrinkage-thresholding algorithm (ISTA)) as deep neural networks (DNNs) with learnable …
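
One way to picture the hybrid construction: each unfolded layer mixes a classical, provably convergent ISTA step with the output of a free-form network, so the learned part can deviate only in a controlled way. A minimal sketch assuming PyTorch; the convex-combination rule, the mixing weight `alpha`, and the tiny MLP are illustrative stand-ins for the paper's construction.

```python
import torch

class HybridISTALayer(torch.nn.Module):
    """One layer computing (1 - alpha)*ISTA(x) + alpha*f(x), where f is a
    free-form network and the ISTA step targets
    0.5*||D x - y||^2 + lam*||x||_1."""
    def __init__(self, D, lam=0.1, alpha=0.3):
        super().__init__()
        self.D, self.lam, self.alpha = D, lam, alpha
        self.step = 1.0 / torch.linalg.matrix_norm(D, ord=2).item() ** 2
        p = D.shape[1]
        self.f = torch.nn.Sequential(              # free-form correction net
            torch.nn.Linear(p, p), torch.nn.ReLU(), torch.nn.Linear(p, p))

    def forward(self, x, y):
        v = x - self.step * (x @ self.D.T - y) @ self.D
        ista = torch.sign(v) * torch.relu(v.abs() - self.step * self.lam)
        return (1 - self.alpha) * ista + self.alpha * self.f(x)
```

Keeping `alpha` strictly below 1 is the lever that lets such hybrid schemes retain convergence guarantees while still benefiting from the learned correction.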