Generalization in kernel regression under realistic assumptions

D Barzilai, O Shamir - arXiv preprint arXiv:2312.15995, 2023 - arxiv.org
It is by now well-established that modern over-parameterized models seem to elude the bias-
variance tradeoff and generalize well despite overfitting noise. Many recent works attempt to …

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension

M Haas, D Holzmüller, U Luxburg… - Advances in Neural …, 2024 - proceedings.neurips.cc
The success of over-parameterized neural networks trained to near-zero training error has
caused great interest in the phenomenon of benign overfitting, where estimators are …

Phase transition in noisy high-dimensional random geometric graphs

S Liu, MZ Rácz - Electronic Journal of Statistics, 2023 - projecteuclid.org
We study the problem of detecting latent geometric structure in random graphs. To this end,
we consider the soft high-dimensional random geometric graph G(n, p, d, q), where each of …

Learning curves for Gaussian process regression with power-law priors and targets

H Jin, PK Banerjee, G Montúfar - arXiv preprint arXiv:2110.12231, 2021 - arxiv.org
We characterize the power-law asymptotics of learning curves for Gaussian process
regression (GPR) under the assumption that the eigenspectrum of the prior and the …

Why shallow networks struggle with approximating and learning high frequency: A numerical study

S Zhang, H Zhao, Y Zhong, H Zhou - arXiv preprint arXiv:2306.17301, 2023 - arxiv.org
In this work, a comprehensive numerical study involving analysis and experiments shows
why a two-layer neural network has difficulties handling high frequencies in approximation …

Entrywise error bounds for low-rank approximations of kernel matrices

A Modell - arXiv preprint arXiv:2405.14494, 2024 - arxiv.org
In this paper, we derive entrywise error bounds for low-rank approximations of kernel
matrices obtained using the truncated eigen-decomposition (or singular value …

[BOOK][B] Generalization of Wide Neural Networks from the Perspective of Linearization and Kernel Learning

H Jin - 2022 - search.proquest.com
Recently people showed that wide neural networks can be approximated by linear models
under gradient descent [JGH18a, LXS19a]. In this dissertation we study generalization of …