System identification: A machine learning perspective

A Chiuso, G Pillonetto - Annual Review of Control, Robotics, and …, 2019 - annualreviews.org
Estimation of functions from sparse and noisy data is a central theme in machine learning. In
the last few years, many algorithms have been developed that exploit Tikhonov …
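The Tikhonov-type estimators this snippet refers to can be illustrated with kernel ridge regression, the canonical regularized estimator of a function from sparse, noisy samples. A minimal sketch on synthetic data (the Gaussian kernel, its width, the regularization weight, and the data are illustrative choices, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(X1, X2, width=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between two sets of 1-D inputs.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * width ** 2))

def kernel_ridge_fit(X, y, lam=0.1, width=1.0):
    # Tikhonov regularization in the RKHS induced by the kernel:
    # minimize sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2.
    # By the representer theorem, f(x) = sum_j alpha_j k(x, x_j)
    # with alpha = (K + lam * I)^{-1} y.
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test, width=1.0):
    return gaussian_kernel(X_test, X_train, width) @ alpha

# Sparse, noisy samples of a smooth function.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 2 * np.pi, 30))
y = np.sin(X) + 0.1 * rng.standard_normal(30)
alpha = kernel_ridge_fit(X, y, lam=0.1)
y_hat = kernel_ridge_predict(X, alpha, X)
```

The regularization weight `lam` trades data fit against the RKHS norm of the estimate, which is what makes the problem well posed despite sparse, noisy data.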

Kernel methods in system identification, machine learning and function estimation: A survey

G Pillonetto, F Dinuzzo, T Chen, G De Nicolao, L Ljung - Automatica, 2014 - Elsevier
Most of the currently used techniques for linear system identification are based on classical
estimation paradigms coming from mathematical statistics. In particular, maximum likelihood …

[BOOK][B] Learning theory: an approximation theory viewpoint

F Cucker, DX Zhou - 2007 - books.google.com
The goal of learning theory is to approximate a function from sample values. To attain this
goal learning theory draws on a variety of diverse subjects, specifically statistics …

Learning theory estimates via integral operators and their approximations

S Smale, DX Zhou - Constructive approximation, 2007 - Springer
The regression problem in learning theory is investigated with least square Tikhonov
regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our …

Distributed learning with regularized least squares

SB Lin, X Guo, DX Zhou - Journal of Machine Learning Research, 2017 - jmlr.org
We study distributed learning with the least squares regularization scheme in a reproducing
kernel Hilbert space (RKHS). By a divide-and-conquer approach, the algorithm partitions a …
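The divide-and-conquer scheme described in this snippet can be sketched as follows: randomly partition the sample, solve a regularized least-squares problem on each subset, and average the local predictors (the kernel, partition count, regularization weight, and data below are illustrative assumptions):

```python
import numpy as np

def rbf(A, B, w=1.0):
    # Gaussian kernel matrix between two sets of 1-D inputs.
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * w ** 2))

def distributed_krr(X, y, m, lam, X_test):
    # Divide-and-conquer regularized least squares: partition the sample
    # into m disjoint subsets, fit kernel ridge regression on each subset
    # independently, then average the m local predictions.
    preds = []
    for Xs, ys in zip(np.array_split(X, m), np.array_split(y, m)):
        alpha = np.linalg.solve(rbf(Xs, Xs) + lam * np.eye(len(Xs)), ys)
        preds.append(rbf(X_test, Xs) @ alpha)
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, 120)  # left unsorted so splits act as random subsets
y = np.sin(X) + 0.1 * rng.standard_normal(120)
X_test = np.linspace(0, 2 * np.pi, 50)
y_hat = distributed_krr(X, y, m=4, lam=0.1, X_test=X_test)
```

Each local solve costs only O((n/m)^3) instead of O(n^3), which is the computational point of the divide-and-conquer strategy; averaging then reduces the variance of the local estimates.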

Learning with average top-k loss

Y Fan, S Lyu, Y Ying, B Hu - Advances in neural information …, 2017 - proceedings.neurips.cc
In this work, we introduce the average top-k (ATk) loss as a new ensemble loss for
supervised learning. The ATk loss provides a natural generalization of the two widely used …
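The average top-k loss named here has a simple closed form: the mean of the k largest individual losses over the training set, which recovers the maximum loss at k = 1 and the ordinary average loss at k = n. A minimal sketch (the sample losses are illustrative):

```python
import numpy as np

def average_top_k_loss(individual_losses, k):
    # ATk aggregate loss: the average of the k largest individual losses.
    # k = 1 gives the maximum loss; k = n gives the average loss.
    losses = np.sort(np.asarray(individual_losses))[::-1]  # descending
    return float(np.mean(losses[:k]))

losses = [0.1, 0.9, 0.4, 0.7]
print(average_top_k_loss(losses, 1))  # max loss -> 0.9
print(average_top_k_loss(losses, 2))  # (0.9 + 0.7) / 2 -> 0.8
print(average_top_k_loss(losses, 4))  # mean of all four losses
```

Intermediate values of k interpolate between these two extremes, emphasizing hard examples without letting a single outlier dominate the way the maximum loss does.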

Rank-based decomposable losses in machine learning: A survey

S Hu, X Wang, S Lyu - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Recent works have revealed an essential paradigm in designing loss functions that
differentiate individual losses versus aggregate losses. The individual loss measures the …

Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces

J Lin, A Rudi, L Rosasco, V Cevher - Applied and Computational Harmonic …, 2020 - Elsevier
In this paper, we study regression problems over a separable Hilbert space with the square
loss, covering non-parametric regression over a reproducing kernel Hilbert space. We …

Regularization in kernel learning

S Mendelson, J Neeman - 2010 - projecteuclid.org
Under mild assumptions on the kernel, we obtain the best known error rates in a regularized
learning scenario taking place in the corresponding reproducing kernel Hilbert space …

Spectral algorithms for supervised learning

LL Gerfo, L Rosasco, F Odone, ED Vito, A Verri - Neural Computation, 2008 - direct.mit.edu
We discuss how a large class of regularization methods, collectively known as spectral
regularization and originally designed for solving ill-posed inverse problems, gives rise to …
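One classical member of this spectral family is the Landweber iteration, i.e. gradient descent on the least-squares objective, where the number of iterations plays the role of the regularization parameter. A minimal sketch on synthetic data (the kernel, step size, iteration count, and data are illustrative choices):

```python
import numpy as np

def rbf(A, B, w=1.0):
    # Gaussian kernel matrix between two sets of 1-D inputs.
    return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * w ** 2))

def landweber(K, y, step, n_iter):
    # Landweber iteration for K @ alpha = y (K symmetric PSD).
    # Early stopping regularizes: after t steps each eigen-direction of K
    # with eigenvalue s has been filtered by g_t(s) = (1 - (1 - step*s)^t)/s,
    # so large (signal-dominated) eigenvalues are fit quickly while small
    # (noise-dominated) ones remain damped.
    alpha = np.zeros_like(y)
    for _ in range(n_iter):
        alpha = alpha + step * (y - K @ alpha)
    return alpha

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 2 * np.pi, 40))
y = np.sin(X) + 0.1 * rng.standard_normal(40)
K = rbf(X, X)
step = 1.0 / np.linalg.eigvalsh(K)[-1]  # step <= 1/||K|| ensures convergence
alpha = landweber(K, y, step, n_iter=500)
y_hat = K @ alpha
```

Tikhonov regularization, truncated spectral decompositions, and iterative schemes like this one all fit the same template and differ only in the filter function applied to the eigenvalues of the kernel matrix.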