Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent …
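For background (a standard fact about the lazy regime, assumed here rather than taken from the snippet): a network trained in the lazy regime behaves as kernel regression with the neural tangent kernel $\Theta(x,x') = \nabla_\theta f(x;\theta_0)^\top \nabla_\theta f(x';\theta_0)$, a kernel fixed at initialisation that does not adapt to the data; the question above asks whether rich feature learning yields an analogous kernel that is data-dependent.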
Generalised linear models for multi-class classification problems are among the fundamental building blocks of modern machine learning. In this manuscript, we characterise the …
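For the setting (the standard softmax / multinomial logistic model, given as an illustration rather than the manuscript's exact formulation): a multi-class generalised linear model posits $p(y = k \mid x) = \exp(w_k^\top x) / \sum_{k'} \exp(w_{k'}^\top x)$, with one weight vector $w_k$ per class.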
In this manuscript we consider Kernel Ridge Regression (KRR) under the Gaussian design. Exponents for the decay of the excess generalization error of KRR have been reported in …
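For reference, the KRR estimator in question is the standard one (background, not a result of the manuscript): $\hat f_\lambda(x) = k(x, X)\,(K + \lambda I_n)^{-1} y$ with $K_{ij} = k(x_i, x_j)$; in this literature the excess generalization error is typically shown to decay as a power law $n^{-\beta}$, with the exponent $\beta$ governed by source and capacity conditions on the target and the kernel spectrum.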
C Gerbelot, R Berthier - Information and Inference: A Journal of …, 2023 - academic.oup.com
Approximate message passing (AMP) algorithms have become an important element of high- dimensional statistical inference, mostly due to their adaptability and concentration …
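As background on the algorithm class (a generic AMP recursion for the linear model $y = Ax + w$, not the specific iterations analysed in this work): $x^{t+1} = \eta_t(x^t + A^\top z^t)$ and $z^t = y - A x^t + \tfrac{1}{\delta}\, z^{t-1} \langle \eta_{t-1}'(x^{t-1} + A^\top z^{t-1}) \rangle$, where $\delta$ is the aspect ratio of $A$ and the last term is the Onsager correction that lets the iterates be tracked by a scalar state evolution.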
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Machine Learning practice. Understanding the statistical fluctuations engendered …
Sharp global convergence guarantees for iterative nonconvex optimization with random data. The Annals of Statistics, 2023, Vol. 51, No. 1, 179–210. https://doi.org/10.1214/22-AOS2246 …
D Bosch, A Panahi, B Hassibi - The Thirty Sixth Annual …, 2023 - proceedings.mlr.press
We provide exact asymptotic expressions for the performance of regression by an $L$-layer deep random feature (RF) model, where the input is mapped through multiple random …
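To fix notation (a generic $L$-layer random feature map, stated as background rather than the paper's exact model): the input is mapped through $\Phi(x) = \sigma(W_L\, \sigma(W_{L-1} \cdots \sigma(W_1 x)))$ with fixed random matrices $W_1, \dots, W_L$, and only a linear readout on $\Phi(x)$ is trained, e.g. by ridge regression.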
A Decelle - Physica A: Statistical Mechanics and its Applications, 2022 - Elsevier
Recent progress in Machine Learning has opened the door to actual applications of learning algorithms, but also to new research directions both in the field of Machine Learning …
A Bodin, N Macris - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Recent evidence has shown the existence of a so-called double-descent and even triple-descent behavior for the generalization error of deep-learning models. This important …
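For orientation (a standard description of the phenomenon, not a claim from this entry): in the simplest random-feature and linear regression settings the test error, viewed as a function of the overparameterisation ratio $p/n$, peaks near the interpolation threshold $p \approx n$ and decreases again for $p \gg n$, giving the double-descent curve; triple descent refers to an additional peak that appears in some models.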