Learning curves of generic features maps for realistic datasets with a teacher-student model

B Loureiro, C Gerbelot, H Cui, S Goldt… - Advances in …, 2021 - proceedings.neurips.cc
Teacher-student models provide a framework in which the typical-case performance of high-
dimensional supervised learning can be described in closed form. The assumptions of …

Random features for kernel approximation: A survey on algorithms, theory, and beyond

F Liu, X Huang, Y Chen… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
The class of random features is one of the most popular techniques to speed up kernel
methods in large-scale problems. Related works have been recognized by the NeurIPS Test …
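The random-features idea the abstract refers to can be illustrated with a minimal random Fourier features sketch in the style of Rahimi and Recht, here approximating an RBF kernel. All names, sizes, and the choice of kernel are illustrative assumptions, not taken from the survey:

```python
import numpy as np

# Random Fourier features sketch (illustrative): approximate the RBF kernel
# k(x, y) = exp(-||x - y||^2 / 2) by an inner product of D random features.
rng = np.random.default_rng(0)
d, D = 5, 2000  # input dimension, number of random features (assumed values)

W = rng.normal(size=(D, d))       # frequencies ~ N(0, I), matching the RBF spectrum
b = rng.uniform(0, 2 * np.pi, D)  # random phases

def phi(x):
    """Feature map such that phi(x) @ phi(y) ≈ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = phi(x) @ phi(y)
print(abs(exact - approx))  # approximation error shrinks as D grows
```

The speed-up comes from replacing an $n \times n$ Gram matrix with an $n \times D$ feature matrix, so downstream solvers scale in $D$ rather than $n$.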

[PDF][PDF] Machine learning and the implementable efficient frontier

TI Jensen, BT Kelly, S Malamud… - Swiss Finance Institute …, 2024 - aeaweb.org
We propose that investment strategies should be evaluated based on their net-of-trading-cost
return for each level of risk, which we term the “implementable efficient frontier.” While …

AutoML-GWL: Automated machine learning model for the prediction of groundwater level

A Singh, S Patel, V Bhadani, V Kumar… - … Applications of Artificial …, 2024 - Elsevier
Predicting groundwater levels is pivotal in curbing overexploitation and ensuring effective
water resource governance. However, groundwater level prediction is intricate, driven by …

Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime

H Cui, B Loureiro, F Krzakala… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design.
Exponents for the decay of the excess generalization error of KRR have been reported in …
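As a point of reference for the setting named in the abstract, the following is a minimal kernel ridge regression sketch under a Gaussian design; the kernel, sample sizes, and target function are assumptions for illustration, not the paper's construction:

```python
import numpy as np

# Kernel ridge regression (KRR) under a Gaussian design: fit dual
# coefficients alpha = (K + lam*I)^{-1} y, then predict with the kernel.
rng = np.random.default_rng(1)
n, d, lam = 200, 10, 1e-2  # samples, dimension, ridge penalty (assumed)

X = rng.normal(size=(n, d))                     # Gaussian design
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)  # noisy target (illustrative)

def rbf(A, B, gamma=0.1):
    """RBF kernel matrix between row sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(n), y)  # KRR dual solution

X_test = rng.normal(size=(50, d))
y_pred = rbf(X_test, X) @ alpha                  # out-of-sample predictions
```

The excess generalization error of this estimator, as a function of $n$ and the noise level, is the quantity whose decay exponents the paper studies.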

Multiple descent: Design your own generalization curve

L Chen, Y Min, M Belkin… - Advances in Neural …, 2021 - proceedings.neurips.cc
This paper explores the generalization loss of linear regression in variably parameterized
families of models, both under-parameterized and over-parameterized. We show that the …

Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression

T Misiakiewicz - arXiv preprint arXiv:2204.10425, 2022 - arxiv.org
We study the spectrum of inner-product kernel matrices, i.e., $n \times n$ matrices with
entries $h(\langle \mathbf{x}_i, \mathbf{x}_j \rangle / d)$ where the $(\mathbf{x}_i)_{i \leq n}$ …

A theoretical analysis of the test error of finite-rank kernel ridge regression

TS Cheng, A Lucchi, A Kratsios… - Advances in Neural …, 2023 - proceedings.neurips.cc
Existing statistical learning guarantees for general kernel regressors often yield loose
bounds when used with finite-rank kernels. Yet, finite-rank kernels naturally appear in a …
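A finite-rank kernel of the kind the abstract mentions is one induced by a fixed, finite-dimensional feature map. A generic example (degree-2 monomials in one variable, chosen here purely for illustration, not the paper's setup):

```python
import numpy as np

# Finite-rank kernel sketch: k(x, y) = phi(x) @ phi(y) with a 3-dimensional
# feature map, so every Gram matrix it produces has rank at most 3.
def phi(x):
    """Feature map of monomials 1, x, x^2 (illustrative choice)."""
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

x = np.linspace(-1, 1, 20)   # 20 distinct sample points
K = phi(x) @ phi(x).T        # 20 x 20 Gram matrix of the finite-rank kernel
print(np.linalg.matrix_rank(K))  # → 3
```

Because the rank is bounded independently of the sample size, generic kernel regression bounds that scale with the full spectrum can be loose here, which is the gap the paper addresses.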

Benign overfitting in deep neural networks under lazy training

Z Zhu, F Liu, G Chrysos, F Locatello… - … on Machine Learning, 2023 - proceedings.mlr.press
This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU
activation functions and proves that when the data distribution is well-separated, DNNs can …

Six lectures on linearized neural networks

T Misiakiewicz, A Montanari - arXiv preprint arXiv:2308.13431, 2023 - arxiv.org
In these six lectures, we examine what can be learnt about the behavior of multi-layer neural
networks from the analysis of linear models. We first recall the correspondence between …