Most of the techniques currently used for linear system identification are based on classical estimation paradigms from mathematical statistics. In particular, maximum likelihood …
The goal of learning theory is to approximate a function from sample values. To attain this goal, learning theory draws on a variety of subjects, specifically statistics …
S. Smale, D.-X. Zhou - Constructive Approximation, 2007 - Springer
The regression problem in learning theory is investigated with least square Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our …
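The least-squares Tikhonov regularization scheme in an RKHS referred to above is kernel ridge regression: minimize the empirical square loss plus a penalty on the RKHS norm, which reduces to a regularized linear system in the kernel matrix. A minimal sketch (all function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def gaussian_kernel(A, B, gamma=10.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(X, y, lam=1e-6, gamma=10.0):
    # Tikhonov-regularized least squares in the RKHS of the kernel:
    # solve (K + n * lam * I) alpha = y for the representer coefficients.
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, gamma=10.0):
    # f(x) = sum_i alpha_i k(x, x_i), by the representer theorem.
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
alpha = krr_fit(X, y)
y_hat = krr_predict(X, alpha, X)
```

The regularization parameter `lam` trades data fit against smoothness; the error rates studied in this line of work describe how `lam` should shrink with the sample size.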
S.-B. Lin, X. Guo, D.-X. Zhou - Journal of Machine Learning Research, 2017 - jmlr.org
We study distributed learning with the least squares regularization scheme in a reproducing kernel Hilbert space (RKHS). By a divide-and-conquer approach, the algorithm partitions a …
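The divide-and-conquer scheme mentioned in the abstract can be sketched as follows: partition the sample into blocks, run the same regularized least-squares estimator on each block, and average the local predictors. This is a hedged illustration under simple assumptions (random partition, shared kernel and regularization); all names are hypothetical:

```python
import numpy as np

def rbf(A, B, gamma=10.0):
    # Pairwise Gaussian kernel matrix between rows of A and B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def distributed_krr_predict(X, y, X_test, m=4, lam=1e-4, seed=0):
    # Divide-and-conquer: randomly partition the sample into m blocks,
    # solve a local regularized least-squares problem on each block,
    # and average the m local predictions at the test points.
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), m)
    preds = []
    for idx in blocks:
        n = len(idx)
        alpha = np.linalg.solve(rbf(X[idx], X[idx]) + n * lam * np.eye(n), y[idx])
        preds.append(rbf(X_test, X[idx]) @ alpha)
    return np.mean(preds, axis=0)

X = np.linspace(0, 1, 100).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
y_hat = distributed_krr_predict(X, y, X)
```

The random partition matters: each block must cover the input distribution, which is the setting in which averaging the local estimators can match the error rate of the full-sample estimator.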
Y. Fan, S. Lyu, Y. Ying, B. Hu - Advances in Neural Information Processing Systems, 2017 - proceedings.neurips.cc
In this work, we introduce the average top-k (ATk) loss as a new ensemble loss for supervised learning. The ATk loss provides a natural generalization of the two widely used …
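The ATk aggregate loss averages only the k largest individual losses, which interpolates between the maximum loss (k = 1) and the average loss (k = n). A minimal sketch of the aggregation step (the function name is illustrative):

```python
import numpy as np

def average_top_k_loss(losses, k):
    # Aggregate individual losses by averaging the k largest values.
    # k = 1 recovers the maximum loss; k = n recovers the average loss.
    losses = np.asarray(losses, dtype=float)
    return np.sort(losses)[-k:].mean()

losses = np.array([0.1, 0.5, 2.0, 0.2])
```

Because it focuses on the hardest examples without committing entirely to the single worst one, the ATk loss is less brittle to outliers than the maximum loss while remaining more sensitive to rare hard cases than the average loss.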
S. Hu, X. Wang, S. Lyu - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 - ieeexplore.ieee.org
Recent works have revealed an essential paradigm in designing loss functions that differentiate individual losses versus aggregate losses. The individual loss measures the …
In this paper, we study regression problems over a separable Hilbert space with the square loss, covering non-parametric regression over a reproducing kernel Hilbert space. We …
Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space …
We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to …
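Spectral regularization replaces the exact (ill-posed) inverse of the kernel matrix with a filtered inverse: a filter function g_lam is applied to the eigenvalues, and different choices of g_lam recover different classical methods. A minimal sketch for two such filters, Tikhonov and truncated SVD (the function names and the PSD-matrix setup are assumptions for illustration):

```python
import numpy as np

def spectral_solve(K, y, lam, filt="tikhonov"):
    # Spectral regularization: alpha = g_lam(K) y, where the filter g_lam
    # acts on the eigenvalues of the symmetric PSD kernel matrix K.
    sigma, U = np.linalg.eigh(K)
    if filt == "tikhonov":
        g = 1.0 / (sigma + lam)                     # g(s) = 1 / (s + lam)
    elif filt == "tsvd":
        safe = np.where(sigma > lam, sigma, 1.0)    # avoid dividing by ~0
        g = np.where(sigma > lam, 1.0 / safe, 0.0)  # keep only s > lam
    else:
        raise ValueError(filt)
    return U @ (g * (U.T @ y))
```

With the Tikhonov filter this coincides exactly with solving the regularized linear system (K + lam I) alpha = y; the truncated-SVD filter instead discards the unstable small-eigenvalue directions outright, and iterative methods such as Landweber iteration correspond to yet other filter functions.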