Modern regularization methods for inverse problems

M Benning, M Burger - Acta Numerica, 2018 - cambridge.org
Regularization methods are a key tool in the solution of inverse problems. They are used to
introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses …

Don't fall for tuning parameters: tuning-free variable selection in high dimensions with the TREX

J Lederer, C Müller - Proceedings of the AAAI conference on artificial …, 2015 - ojs.aaai.org
Lasso is a popular method for high-dimensional variable selection, but it hinges on a tuning
parameter that is difficult to calibrate in practice. In this study, we introduce TREX, an …
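The calibration difficulty this snippet alludes to can be seen in a minimal sketch (my own illustration, not the TREX estimator itself): in an orthonormal design the lasso reduces to soft-thresholding of the least-squares coefficients, so the selected support depends directly on the tuning parameter. All variable names and values below are assumptions chosen for the example.

```python
import numpy as np

# In an orthonormal design (X^T X = I) the lasso solution is exactly
# soft-thresholding of the least-squares coefficients X^T y, so the
# set of selected variables is a direct function of the tuning parameter lam.
rng = np.random.default_rng(1)
X, _ = np.linalg.qr(rng.standard_normal((100, 10)))   # orthonormal columns
beta_true = np.array([3.0, 2.0, 1.0] + [0.0] * 7)
y = X @ beta_true + 0.1 * rng.standard_normal(100)

z = X.T @ y                                           # least-squares coefficients
supports = []
for lam in (0.5, 1.5, 2.5):
    beta_hat = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    supports.append(set(np.flatnonzero(beta_hat)))
# Larger lam selects fewer variables; deciding which lam is "right"
# is exactly the calibration problem that motivates tuning-free methods.
```

Each choice of `lam` yields a different selected set, which is why selection results hinge so strongly on calibration.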

Bias reduction in variational regularization

EM Brinkmann, M Burger, J Rasch, C Sutour - Journal of Mathematical …, 2017 - Springer
The aim of this paper is to introduce and study a two-step debiasing method for variational
regularization. After solving the standard variational problem, the key idea is to add a …
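The paper's method lives in the general variational-regularization setting; a much simpler analogue of the two-step idea can be sketched in sparse regression (this is my own simplified illustration, not the authors' construction): first solve a penalized problem, then remove the shrinkage bias by refitting without the penalty on the recovered support. All sizes and parameters below are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal map of t * ||.||_1: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(80)

# Step 1: a regularized (hence biased) estimate -- lasso solved by ISTA.
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(20)
for _ in range(1000):
    x = soft_threshold(x - A.T @ (A @ x - y) / L, 2.0 / L)

# Step 2: debias by refitting unpenalized least squares on the support.
S = np.flatnonzero(np.abs(x) > 1e-8)
x_debiased = np.zeros(20)
x_debiased[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
```

The refit can only lower the data-fit residual on the chosen support, which is the sense in which the second step removes the systematic shrinkage of the first.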

Clear: Covariant least-square refitting with applications to image restoration

CA Deledalle, N Papadakis, J Salmon, S Vaiter - SIAM Journal on Imaging …, 2017 - SIAM
In this paper, we propose a new framework to remove parts of the systematic errors affecting
popular restoration algorithms, with a special focus on image processing tasks. Generalizing …

Linear Regression

J Lederer - Fundamentals of High-Dimensional Statistics: With …, 2022 - Springer
Linear regression relates predictor variables and outcome variables, such as gene copy
numbers and the level of a biomarker. The assumed linearity of the relationships makes the …

Efficient smoothed concomitant lasso estimation for high dimensional regression

E Ndiaye, O Fercoq, A Gramfort… - Journal of Physics …, 2017 - iopscience.iop.org
In high-dimensional settings, sparse structures are crucial for efficiency, in terms of
memory, computation, and performance. It is customary to consider an ℓ1 penalty to enforce …

Learned extragradient ISTA with interpretable residual structures for sparse coding

Y Li, L Kong, F Shang, Y Liu, H Liu, Z Lin - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Recently, the study of the learned iterative shrinkage thresholding algorithm (LISTA) has
attracted increasing attention. A large number of experiments as well as some theories …
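LISTA unrolls and learns the parameters of classical ISTA; the baseline it builds on can be sketched in a few lines (a minimal numpy illustration of plain ISTA, not the learned variant from the paper). Problem sizes and the penalty weight below are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by proximal gradient steps.
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))      # overcomplete dictionary
x_true = np.zeros(100)
x_true[:5] = 1.0
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

LISTA replaces the fixed matrices and thresholds in this iteration with learned, layer-wise parameters, which is where the interpretable residual structures studied in the paper enter.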

Layer sparsity in neural networks

M Hebiri, J Lederer, M Taheri - Journal of Statistical Planning and Inference, 2025 - Elsevier
Sparsity has become popular in machine learning because it can save computational
resources, facilitate interpretations, and prevent overfitting. This paper discusses sparsity in …

Simulation-selection-extrapolation: estimation in high-dimensional errors-in-variables models

L Nghiem, C Potgieter - Biometrics, 2019 - academic.oup.com
Errors-in-variables models in high-dimensional settings pose two challenges in application.
First, the number of observed covariates is larger than the sample size, while only a small …

On debiasing restoration algorithms: applications to total-variation and nonlocal-means

CA Deledalle, N Papadakis, J Salmon - International Conference on Scale …, 2015 - Springer
Bias in image restoration algorithms can hamper further analysis, typically when the
intensities have a physical meaning of interest, e.g., in medical imaging. We propose to …