The uncanny ability of over-parameterised neural networks to generalise well has been explained using various "simplicity biases". These theories postulate that neural networks …
In this manuscript we consider the problem of generalized linear estimation on Gaussian mixture data with labels given by a single-index model. Our first result is a sharp asymptotic …
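As a rough illustration of the setting this abstract describes (not the paper's own code), the sketch below generates a two-cluster Gaussian mixture with labels given by a single-index model and runs a generalized linear estimator on it. The sign link, the ridge penalty, and all scalings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 500                         # samples and dimension (proportional regime)

# Two-cluster Gaussian mixture: x = c * mu + z with z ~ N(0, I_d)
mu = rng.standard_normal(d) / np.sqrt(d)
cluster = rng.choice([-1.0, 1.0], size=n)
X = cluster[:, None] * mu + rng.standard_normal((n, d))

# Single-index labels: y depends on x only through one projection <theta*, x>
theta_star = rng.standard_normal(d) / np.sqrt(d)
y = np.sign(X @ theta_star)              # illustrative choice of link function

# Generalized linear estimation: ridge-regularised logistic regression,
# minimised here by plain gradient descent on the empirical risk.
lam, lr = 0.1, 0.5
w = np.zeros(d)
for _ in range(500):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n + lam * w
    w -= lr * grad

cos = w @ theta_star / (np.linalg.norm(w) * np.linalg.norm(theta_star))
print("overlap with theta*:", cos)
```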
A recent line of work in high-dimensional statistics, working under the Gaussian mixture hypothesis, has led to a number of results in the context of empirical risk minimization …
C Gerbelot, R Berthier - Information and Inference: A Journal of …, 2023 - academic.oup.com
Approximate message passing (AMP) algorithms have become an important element of high-dimensional statistical inference, mostly due to their adaptability and concentration …
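For readers unfamiliar with AMP, here is a minimal sketch of one classical instance, AMP with a soft-thresholding denoiser for sparse linear regression; the Onsager correction term is what distinguishes it from plain iterative thresholding. Problem sizes and the threshold rule are illustrative, and this is not tied to the paper's graph-based formulation.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser eta(u; t)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(1)
n, d, k = 250, 500, 25                    # measurements, dimension, sparsity
A = rng.standard_normal((n, d)) / np.sqrt(n)
x0 = np.zeros(d)
x0[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(n)

x, z = np.zeros(d), y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(n)          # empirical noise-level estimate
    x_new = soft(x + A.T @ z, tau)                # denoise the effective observation
    onsager = z * (np.count_nonzero(x_new) / n)   # Onsager term: (1/n) * #active coords
    z = y - A @ x_new + onsager                   # corrected residual
    x = x_new

print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```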
H Cui, L Zdeborová - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We address the problem of denoising data from a Gaussian mixture using a two-layer non-linear autoencoder with tied weights and a skip connection. We consider the high …
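The architecture named in this abstract is compact enough to sketch. Below is a minimal PyTorch version of a two-layer autoencoder with tied weights and a learnable skip connection, trained on a denoising objective over a two-cluster mixture; the activation, scalings, and training details are assumptions for illustration, not the paper's exact setup.

```python
import torch

class TiedDAE(torch.nn.Module):
    """Two-layer denoising autoencoder with tied weights and a skip connection:
    f(x) = a * x + W^T tanh(W x) / sqrt(d). Scalings are illustrative."""
    def __init__(self, d, p):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(p, d) / d ** 0.5)  # tied weights
        self.a = torch.nn.Parameter(torch.tensor(0.5))             # skip weight

    def forward(self, x):                     # x: (n, d)
        d = x.shape[1]
        return self.a * x + (self.W.T @ torch.tanh(self.W @ x.T)).T / d ** 0.5

d, p, n = 100, 20, 1024
mu = torch.randn(d)                           # cluster mean
labels = torch.randint(0, 2, (n,)) * 2.0 - 1.0
clean = labels[:, None] * mu                  # two-cluster mixture signal
noisy = clean + torch.randn(n, d)             # additive Gaussian noise

model = TiedDAE(d, p)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(noisy) - clean) ** 2).mean()   # denoising MSE objective
    loss.backward()
    opt.step()
print("final denoising MSE:", loss.item())
```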
From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Machine Learning practice. Understanding the statistical fluctuations engendered …
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning. Naive measures such as last-layer scores are well-known to yield overconfident …
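The "last-layer scores" mentioned here are simply the maximum softmax probability of a classifier's logits. The toy sketch below shows that score and one common, generic remedy, temperature scaling, which softens overconfident outputs; neither the numbers nor the remedy are claimed to come from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Row-wise softmax with temperature T (T > 1 softens the distribution)."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[4.0, 1.0, 0.0],
                   [2.5, 2.3, 2.1]])
conf = softmax(logits).max(axis=1)            # naive last-layer confidence score
conf_T = softmax(logits, T=2.0).max(axis=1)   # temperature-scaled confidence
print("raw:", conf, " tempered:", conf_T)
```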
In this manuscript we investigate the problem of how two-layer neural networks learn features from data, and improve over the kernel regime, after being trained with a single …
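A rough numerical caricature of this setting, under assumptions of my own choosing: train the first layer of a two-layer ReLU network with a single large gradient step, refit the second layer by ridge, and compare with the kernel (random-features) regime where the first layer stays at initialisation. The target link, the sqrt(p) step-size scaling, and whether the gap is visible at these sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, p = 4000, 200, 400
theta = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = (X @ theta) ** 2 - 1.0                    # even single-index target (illustrative)

W0 = rng.standard_normal((p, d)) / np.sqrt(d)
a0 = rng.standard_normal(p) / np.sqrt(p)

def ridge_err(W):
    """Fit the second layer by ridge on ReLU features of W; report train MSE."""
    Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(p)
    a = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(p), Phi.T @ y)
    return np.mean((Phi @ a - y) ** 2)

err_rf = ridge_err(W0)                        # kernel / random-features regime

# One large gradient step on the first layer (squared loss, second layer a0).
Phi0 = np.maximum(X @ W0.T, 0.0)
resid = (Phi0 / np.sqrt(p)) @ a0 - y
G = ((resid[:, None] * (Phi0 > 0)) * a0[None, :]).T @ X / (n * np.sqrt(p))
W1 = W0 - (5.0 * np.sqrt(p)) * G              # step size of order sqrt(p)
err_fl = ridge_err(W1)
print(f"random features: {err_rf:.3f}   after one step: {err_fl:.3f}")
```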
K Tan, PC Bellec - Advances in Neural Information …, 2024 - proceedings.neurips.cc
This paper investigates the asymptotic distribution of the maximum-likelihood estimate (MLE) in multinomial logistic models in the high-dimensional regime where dimension and …
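The phenomenon this abstract studies is easy to observe empirically: in the proportional regime where d is comparable to n, the unregularised multinomial MLE systematically inflates the signal relative to the truth. The sketch below, with sizes and signal strength chosen for illustration, fits the MLE with scikit-learn and prints the norm inflation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, d, K = 1000, 200, 3                        # proportional regime: d/n not small
B = rng.standard_normal((d, K)) / np.sqrt(d)  # true coefficient matrix
X = rng.standard_normal((n, d))
P = np.exp(X @ B)
P /= P.sum(axis=1, keepdims=True)             # multinomial class probabilities
y = np.array([rng.choice(K, p=row) for row in P])

# Unregularised multinomial MLE; when d is comparable to n, the classical
# fixed-dimension asymptotics for the MLE no longer apply and it is biased.
mle = LogisticRegression(penalty=None, max_iter=2000).fit(X, y)
B_hat = mle.coef_.T                           # shape (d, K)
print("norm inflation ||B_hat|| / ||B||:",
      np.linalg.norm(B_hat) / np.linalg.norm(B))
```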