Two-level ℓ1 minimization for compressed sensing

X Huang, Y Liu, L Shi, S Van Huffel, JAK Suykens - Signal Processing, 2015 - Elsevier
Compressed sensing using ℓ1 minimization has been widely and successfully applied. To
further enhance the sparsity, a non-convex and piecewise linear penalty is proposed. This …
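
As a point of reference for this entry, the following is a minimal sketch of plain ℓ1 (Lasso) recovery for compressed sensing; it is not the paper's two-level piecewise linear penalty, and the sensing matrix size, sparsity level, noise level, and regularization weight are illustrative assumptions.

    # Minimal sketch of plain l1 (Lasso) recovery; not the paper's two-level
    # penalty. Problem sizes and the regularization weight are illustrative.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 8                            # signal length, measurements, nonzeros
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
    y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy linear measurements

    # Solves min_x (1/2m)||y - Ax||^2 + alpha*||x||_1
    lasso = Lasso(alpha=0.01, max_iter=10000)
    lasso.fit(A, y)
    x_hat = lasso.coef_

    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))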

Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks

F Liu, L Dadi, V Cevher - Journal of Machine Learning Research, 2024 - jmlr.org
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space
to model functions by neural networks as the curse of dimensionality (CoD) cannot be …
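
For orientation only, the sketch below trains an over-parameterized two-layer ReLU network with an explicit norm penalty of the form sum_j |a_j|·||w_j|| (a path/variation-norm surrogate); the width, toy data, and all hyperparameters are illustrative assumptions and do not reproduce the paper's analysis.

    # Minimal sketch: over-parameterized two-layer ReLU net with a path-norm
    # style penalty sum_j |a_j|*||w_j||. Width, data, and hyperparameters are
    # illustrative assumptions, not the paper's setting.
    import torch

    torch.manual_seed(0)
    d, width, n = 5, 512, 200
    X = torch.randn(n, d)
    y = torch.sin(3 * X[:, 0]) + 0.1 * torch.randn(n)   # toy single-index target

    W = torch.randn(width, d, requires_grad=True)       # hidden-layer weights
    a = torch.randn(width, requires_grad=True)          # output weights
    opt = torch.optim.Adam([W, a], lr=1e-2)

    lam = 1e-3
    for step in range(2000):
        opt.zero_grad()
        pred = torch.relu(X @ W.T) @ a / width
        norm_pen = (a.abs() * W.norm(dim=1)).sum() / width
        loss = torch.mean((pred - y) ** 2) + lam * norm_pen
        loss.backward()
        opt.step()

    with torch.no_grad():
        mse = torch.mean((torch.relu(X @ W.T) @ a / width - y) ** 2)
    print("final training MSE:", mse.item())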

Low rank optimization for efficient deep learning: Making a balance between compact architecture and fast training

X Ou, Z Chen, C Zhu, Y Liu - Journal of Systems Engineering …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have achieved great success in many data processing
applications. However, high computational complexity and storage cost make deep learning …
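
As a small illustration of the low-rank idea, the sketch below factorizes a single dense layer's weight matrix with a truncated SVD; the layer sizes, the rank, and the synthetic "pretrained" weight are arbitrary choices, and this is only one ingredient of the compact-architecture versus fast-training trade-off surveyed in the paper.

    # Minimal sketch: compress one dense layer by a rank-r SVD factorization.
    # Sizes, rank, and the synthetic weight matrix are illustrative; practical
    # schemes also fine-tune the factors after the decomposition.
    import numpy as np

    rng = np.random.default_rng(0)
    d_out, d_in, r = 1024, 1024, 64
    L = rng.standard_normal((d_out, 32))
    R = rng.standard_normal((32, d_in))
    W = L @ R + 0.01 * rng.standard_normal((d_out, d_in))   # approximately low-rank weight

    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * s[:r]                                    # (d_out, r)
    B = Vt[:r, :]                                           # (r, d_in)

    x = rng.standard_normal(d_in)
    err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)

    print("parameter reduction: %.1fx" % (W.size / (A.size + B.size)))
    print("relative output error: %.4f" % err)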

Exploring Kernel Machines and Support Vector Machines: Principles, Techniques, and Future Directions.

KL Du, B Jiang, J Lu, J Hua… - Mathematics (2227 …, 2024 - search.ebscohost.com
The kernel method is a tool that converts data to a kernel space where operations can be
performed. When converted to a high-dimensional feature space by using kernel functions …
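
The following is a minimal sketch of the kernel trick in practice: an RBF-kernel support vector machine on a toy nonlinearly separable dataset. The dataset and hyperparameters are illustrative and are not drawn from the survey.

    # Minimal sketch: RBF-kernel SVM on a toy nonlinear dataset. The kernel
    # implicitly maps the data to a high-dimensional feature space.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))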

Iterative kernel regression with preconditioning

L Shi, Z Zhang - Analysis and Applications, 2024 - scholars.cityu.edu.hk
Kernel methods are popular in nonlinear and nonparametric regression due to their solid
mathematical foundations and optimal statistical properties. However, scalability remains the …
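
A minimal sketch of the iterative idea is given below: kernel ridge regression solved by conjugate gradient with a simple Jacobi (diagonal) preconditioner. The Gaussian kernel, regularization level, and choice of preconditioner are illustrative assumptions, not the specific preconditioning scheme analyzed in the paper.

    # Minimal sketch: kernel ridge regression solved by preconditioned CG with
    # a Jacobi (diagonal) preconditioner. Kernel, lambda, and preconditioner
    # are illustrative assumptions.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)
    n = 500
    X = rng.uniform(-1, 1, size=n)
    y = np.sin(3 * X) + 0.1 * rng.standard_normal(n)

    gamma, lam = 10.0, 1e-3
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)   # Gaussian kernel matrix
    A = K + n * lam * np.eye(n)                           # regularized linear system

    d = np.diag(A)                                        # Jacobi preconditioner
    M = LinearOperator((n, n), matvec=lambda v: v / d)

    alpha, info = cg(A, y, M=M, maxiter=200)
    print("CG exit flag (0 = converged):", info)
    print("training RMSE:", np.sqrt(np.mean((K @ alpha - y) ** 2)))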

Learning Analysis of Kernel Ridgeless Regression with Asymmetric Kernel Learning

F He, M He, L Shi, X Huang, JAK Suykens - arXiv preprint arXiv …, 2024 - arxiv.org
Ridgeless regression has garnered attention among researchers, particularly in light of
the "Benign Overfitting" phenomenon, where models interpolating noisy samples …
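
For context, the sketch below performs ridgeless (minimum-norm interpolating) kernel regression with an ordinary symmetric Gaussian kernel; the asymmetric kernel learning studied in the paper is not reproduced here, and all sizes and bandwidths are illustrative.

    # Minimal sketch: ridgeless kernel regression (no regularization term) with
    # a symmetric Gaussian kernel; sizes and bandwidth are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    X = np.linspace(-1, 1, n)
    y = np.sin(3 * X) + 0.2 * rng.standard_normal(n)      # noisy samples

    gamma = 200.0
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)

    alpha = np.linalg.pinv(K) @ y                          # minimum-norm interpolant

    X_test = np.linspace(-1, 1, 400)
    K_test = np.exp(-gamma * (X_test[:, None] - X[None, :]) ** 2)
    f_test = K_test @ alpha

    print("max training residual:", np.max(np.abs(K @ alpha - y)))
    print("test MSE vs noiseless target:", np.mean((f_test - np.sin(3 * X_test)) ** 2))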

Sparse online regression algorithm with insensitive loss functions

T Hu, J Xiong - Journal of Multivariate Analysis, 2024 - Elsevier
Online learning is an efficient approach in machine learning and statistics, which iteratively
updates models upon the observation of a sequence of training examples. A representative …
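
A minimal sketch of the general setting, one-example-at-a-time updates under an epsilon-insensitive loss with an ℓ1 penalty, can be given with scikit-learn's SGDRegressor; this illustrates the loss and the online updates only, not the kernel-based algorithm and error analysis of the paper, and all parameters are illustrative.

    # Minimal sketch: online linear regression with an epsilon-insensitive loss
    # and l1 penalty, updated one example at a time. Parameters are illustrative.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    rng = np.random.default_rng(0)
    model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.1,
                         penalty="l1", alpha=1e-4,
                         learning_rate="constant", eta0=0.01)

    w_true = np.array([1.5, 0.0, -2.0, 0.0, 0.5])   # sparse ground truth
    for t in range(5000):                            # examples arrive sequentially
        x = rng.standard_normal((1, 5))
        y = x @ w_true + 0.05 * rng.standard_normal(1)
        model.partial_fit(x, y)

    print("learned weights:", np.round(model.coef_, 2))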

Which Spaces can be Embedded in ℒp-type Reproducing Kernel Banach Space? A Characterization via Metric Entropy

Y Lu, D Lin, Q Du - arXiv preprint arXiv:2410.11116, 2024 - arxiv.org
In this paper, we establish a novel connection between the metric entropy growth and the
embeddability of function spaces into reproducing kernel Hilbert/Banach spaces. Metric …
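
For reference, the metric entropy referred to in the snippet is the logarithm of the covering number; a standard definition, in generic notation rather than the paper's, is

    \[
      N(\varepsilon, \mathcal{F}, d) \;=\; \min\Big\{ m \in \mathbb{N} :
        \exists\, f_1,\dots,f_m \in \mathcal{F}\ \text{such that}\
        \mathcal{F} \subseteq \bigcup_{i=1}^{m} \{ f : d(f, f_i) \le \varepsilon \} \Big\},
      \qquad
      H(\varepsilon, \mathcal{F}, d) \;=\; \log N(\varepsilon, \mathcal{F}, d),
    \]

and, per the snippet, the paper relates how fast H(ε, F, d) grows as ε → 0 to whether F embeds into a reproducing kernel Hilbert or Banach space.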

Sparse kernel sufficient dimension reduction

B Liu, L Xue - Journal of Nonparametric Statistics, 2024 - Taylor & Francis
Sufficient dimension reduction (SDR) with sparsity has received much attention for
analysing high-dimensional data. We study a nonparametric sparse kernel sufficient …

Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications

SB Lin, S Tang, Y Wang… - INFORMS Journal on …, 2022 - pubsonline.informs.org
Ensemble learning methods, such as boosting, focus on producing a strong classifier based
on numerous weak classifiers. In this paper, we develop a novel ensemble learning method …
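
As a generic baseline for the setting described, the sketch below boosts depth-one decision stumps into a stronger classifier with scikit-learn's AdaBoost; it is ordinary boosting on a synthetic dataset, not the structure-constrained ensemble method developed in the paper.

    # Minimal sketch: boost weak learners (depth-1 stumps, AdaBoost's default
    # base estimator) into a stronger classifier. Generic boosting only, on a
    # synthetic dataset; not the paper's structure-constrained method.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)
    ens = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    print("single stump accuracy:", stump.score(X_te, y_te))
    print("boosted ensemble accuracy:", ens.score(X_te, y_te))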