Smoothing the landscape boosts the signal for SGD: Optimal sample complexity for learning single index models

A Damian, E Nichani, R Ge… - Advances in Neural …, 2024 - proceedings.neurips.cc
We focus on the task of learning a single index model $\sigma(w^\star \cdot x)$ with respect
to the isotropic Gaussian distribution in $d$ dimensions. Prior work has shown that the …
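
To make the setup concrete, here is a minimal sketch of the task the abstract describes: online SGD on the unit sphere for a single index model with fresh Gaussian samples. The link function $\sigma(z) = z^2$, the squared loss, and the $1/d$ step-size scaling are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch (not the paper's exact algorithm): online SGD for a
# single index model y = sigma(w_star . x), x ~ N(0, I_d), with squared
# loss and projection back to the unit sphere after each step.
rng = np.random.default_rng(0)
d = 500
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)
sigma = lambda z: z**2          # illustrative link function

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
eta = 0.5 / d                   # 1/d step-size scaling, chosen heuristically

for t in range(200_000):
    x = rng.standard_normal(d)  # fresh sample each step (online SGD)
    y = sigma(w_star @ x)
    pred = sigma(w @ x)
    # gradient of 0.5 * (pred - y)^2 w.r.t. w, using sigma'(z) = 2z
    grad = (pred - y) * 2 * (w @ x) * x
    w -= eta * grad
    w /= np.linalg.norm(w)      # spherical constraint

# sigma is even, so w_star is recovered only up to sign
print("overlap |<w, w_star>|:", abs(w @ w_star))
```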

High-dimensional limit theorems for SGD: Effective dynamics and critical scaling

G Ben Arous, R Gheissari… - Advances in Neural …, 2022 - proceedings.neurips.cc
We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in
the high-dimensional regime. We prove limit theorems for the trajectories of summary …
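
The flavor of such limit theorems can be seen in a toy case. The sketch below (my construction, not the paper's) runs online SGD on least squares with step size $\delta/d$ and checks that the summary statistic $m_t = \langle w_t, w^\star \rangle$, viewed on the time scale $s = t/d$, concentrates on the ODE solution $m(s) = 1 - (1 - m_0)e^{-\delta s}$ as $d$ grows.

```python
import numpy as np

# Sketch (illustrative, not from the paper): for online SGD on the
# least-squares objective 0.5*(w.x - w_star.x)^2 with x ~ N(0, I_d) and
# step size eta = delta/d, the overlap m_t = <w_t, w_star> on the time
# scale s = t/d concentrates, as d grows, on the ODE limit
# m(s) = 1 - (1 - m0) * exp(-delta * s).
rng = np.random.default_rng(1)
delta, m0, s_max = 1.0, 0.0, 4.0

for d in (100, 1000, 10000):
    w_star = np.zeros(d); w_star[0] = 1.0
    w = rng.standard_normal(d); w /= np.linalg.norm(w)
    w -= (w @ w_star - m0) * w_star        # start at overlap exactly m0
    eta = delta / d                        # critical step-size scaling
    for t in range(int(s_max * d)):        # s = t/d runs up to s_max
        x = rng.standard_normal(d)
        w -= eta * ((w - w_star) @ x) * x  # SGD step; w.x - y = (w - w_star).x
    m_theory = 1 - (1 - m0) * np.exp(-delta * s_max)
    print(f"d={d:6d}  m_empirical={w @ w_star:+.3f}  m_ODE={m_theory:+.3f}")
```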

Notes on computational hardness of hypothesis testing: Predictions using the low-degree likelihood ratio

D Kunisky, AS Wein, AS Bandeira - ISAAC Congress (International Society …, 2019 - Springer
These notes survey and explore an emerging method, which we call the low-degree
method, for understanding statistical-versus-computational tradeoffs in high-dimensional …
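
For reference, the central object of the low-degree method, stated in the standard way (notation mine, not quoted from the survey): given a planted distribution $P_n$ and a null $Q_n$, one computes the norm of the low-degree likelihood ratio.

```latex
% Low-degree likelihood ratio (sketch of the standard definitions).
% L_n = dP_n/dQ_n is the likelihood ratio, and L_n^{<=D} is its projection,
% in L^2(Q_n), onto polynomials of degree at most D in the observation Y.
\[
  \bigl\| L_n^{\le D} \bigr\|^2
    \;=\; \mathbb{E}_{Y \sim Q_n}\!\left[ L_n^{\le D}(Y)^2 \right].
\]
% Heuristic prediction: if \|L_n^{\le D}\| stays O(1) as n -> infinity for
% degrees D growing slightly faster than log n, no polynomial-time test is
% expected to distinguish P_n from Q_n; if it diverges, a degree-D test
% succeeds.
```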

[BOOK][B] Bayesian non-linear statistical inverse problems

R Nickl - 2023 - statslab.cam.ac.uk
Mathematics in Zurich has a long and distinguished tradition, in which the writing of lecture
notes volumes and research monographs plays a prominent part. The Zurich Lectures in …

The Franz-Parisi criterion and computational trade-offs in high dimensional statistics

AS Bandeira, A El Alaoui, S Hopkins… - Advances in …, 2022 - proceedings.neurips.cc
Many high-dimensional statistical inference problems are believed to possess inherent
computational hardness. Various frameworks have been proposed to give rigorous …

Reducibility and statistical-computational gaps from secret leakage

M Brennan, G Bresler - Conference on Learning Theory, 2020 - proceedings.mlr.press
Inference problems with conjectured statistical-computational gaps are ubiquitous
throughout modern statistics, computer science, statistical physics and discrete probability …

Phase diagram of stochastic gradient descent in high-dimensional two-layer neural networks

R Veiga, L Stephan, B Loureiro… - Advances in …, 2022 - proceedings.neurips.cc
Despite the non-convex optimization landscape, over-parametrized shallow networks are
able to achieve global convergence under gradient descent. The picture can be radically …
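
A minimal teacher-student instance of this setting, for concreteness; the architecture, widths, activation, and step size below are illustrative choices rather than the paper's experimental protocol.

```python
import numpy as np

# Illustrative teacher-student setup (not the paper's exact experiment):
# a width-p student f(x) = sum_j a_j * tanh(w_j . x) trained by online SGD
# on data labeled by a width-k teacher, with fresh Gaussian inputs.
rng = np.random.default_rng(2)
d, k, p, eta, steps = 200, 2, 8, 0.05, 100_000

W_teacher = rng.standard_normal((k, d)) / np.sqrt(d)
W = rng.standard_normal((p, d)) / np.sqrt(d)   # student first layer
a = rng.standard_normal(p) / np.sqrt(p)        # student second layer

def teacher(x):
    return np.sum(np.tanh(W_teacher @ x))

for t in range(steps):
    x = rng.standard_normal(d)
    h = np.tanh(W @ x)
    err = a @ h - teacher(x)                      # squared-loss residual
    a -= eta * err * h                            # gradient in second layer
    W -= eta * err * np.outer(a * (1 - h**2), x)  # backprop through tanh

# population risk estimated on held-out fresh samples
X = rng.standard_normal((2000, d))
risk = np.mean((np.tanh(X @ W.T) @ a - np.array([teacher(x) for x in X]))**2)
print("estimated test risk:", risk)
```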

A precise high-dimensional asymptotic theory for boosting and minimum-$\ell_1$-norm interpolated classifiers

T Liang, P Sur - The Annals of Statistics, 2022 - projecteuclid.org
A precise high-dimensional asymptotic theory for boosting and minimum-$\ell_1$-norm
interpolated classifiers. The Annals of Statistics, 2022, Vol. 50, No. 3, 1669–1695 …
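
The minimum-$\ell_1$-norm interpolated classifier in the title is, in the standard formulation, the solution of a linear program; the sketch below computes it with `scipy.optimize.linprog` on synthetic data (my formulation of the standard object, not the paper's code).

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of a minimum-l1-norm interpolating classifier:
#   min ||w||_1  subject to  y_i <x_i, w> >= 1  for all i.
# Split w = u - v with u, v >= 0 to obtain a linear program.
rng = np.random.default_rng(3)
n, d = 50, 200                    # overparametrized (d > n): interpolation is feasible
w_true = np.zeros(d); w_true[:5] = 1.0
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

A_ub = -y[:, None] * np.hstack([X, -X])  # encodes -y_i <x_i, u - v> <= -1
b_ub = -np.ones(n)
c = np.ones(2 * d)                       # objective: sum(u) + sum(v) = ||w||_1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
w_hat = res.x[:d] - res.x[d:]
print("margin constraints met:", np.all(y * (X @ w_hat) >= 1 - 1e-6))
print("||w_hat||_1 =", np.abs(w_hat).sum())
```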

Computational barriers to estimation from low-degree polynomials

T Schramm, AS Wein - The Annals of Statistics, 2022 - projecteuclid.org
Computational barriers to estimation from low-degree polynomials. The Annals of
Statistics, 2022, Vol. 50, No. 3, 1833–1858. https://doi.org/10.1214/22-AOS2179 …
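
The degree-$D$ estimation benchmark at the heart of this line of work can be stated compactly (notation mine, written in the spirit of the paper rather than quoted from it): to estimate a scalar quantity $x$ from an observation $Y$, one restricts attention to polynomial estimators of degree at most $D$.

```latex
% Degree-D correlation and the associated low-degree MMSE (sketch).
\[
  \mathrm{Corr}_{\le D} \;:=\; \sup_{f : \deg f \le D}
    \frac{\mathbb{E}\!\left[f(Y)\,x\right]}{\sqrt{\mathbb{E}\!\left[f(Y)^2\right]}},
  \qquad
  \mathrm{MMSE}_{\le D} \;=\; \mathbb{E}\!\left[x^2\right] - \mathrm{Corr}_{\le D}^2 .
\]
% If Corr_{<=D} is small for D polylogarithmic in the dimension, this is
% taken as evidence that no polynomial-time algorithm estimates x well.
```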

Statistical query algorithms and low-degree tests are almost equivalent

M Brennan, G Bresler, SB Hopkins, J Li… - arXiv preprint arXiv …, 2020 - arxiv.org
Researchers currently use a number of approaches to predict and substantiate information-
computation gaps in high-dimensional statistical estimation problems. A prominent …
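
For readers unfamiliar with one side of the equivalence, here is a toy rendering of the statistical query model (my illustration, not the paper's construction): the learner never sees samples, only an oracle's answers to expectation queries, each accurate to a tolerance $\tau$.

```python
import numpy as np

# Toy statistical query (SQ) oracle: given a query q, it returns E[q(x)]
# perturbed by arbitrary noise of magnitude at most tau. Hypothetical names;
# illustrative of the SQ model only, not of the paper's reduction.
rng = np.random.default_rng(4)

def sq_oracle(expectation_fn, tau):
    """Returns an oracle that answers E[q] up to additive error tau."""
    def answer(q):
        return expectation_fn(q) + rng.uniform(-tau, tau)
    return answer

# Toy task: learn the mean of a Bernoulli(p) bit from SQ access alone.
p_true = 0.7
oracle = sq_oracle(lambda q: p_true * q(1) + (1 - p_true) * q(0), tau=0.01)
p_hat = oracle(lambda x: x)   # the query q(x) = x has expectation p
print("estimate of p, within tolerance:", p_hat)
```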