Gaussian random field approximation via Stein's method with applications to wide random neural networks

K Balasubramanian, L Goldstein, N Ross… - Applied and …, 2024 - Elsevier
We derive upper bounds on the Wasserstein distance ($W_1$), with respect to sup-norm,
between any continuous $\mathbb{R}^d$-valued random field indexed by the $n$-sphere and the …
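A minimal simulation sketch of the phenomenon such bounds quantify, under standard assumptions not taken from the paper (one hidden ReLU layer, i.i.d. standard Gaussian weights, 1/√width scaling): the finite-dimensional distributions of the network, viewed as a random field on the sphere, approach those of a Gaussian field as the width grows.

```python
# Minimal illustration (not the paper's construction): as the width grows, a
# one-hidden-layer ReLU network with i.i.d. N(0, 1) weights, evaluated at a few
# points on the sphere, approaches the Gaussian field with covariance
# E[ReLU(w.x) ReLU(w.y)]. Widths, dimension and points are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n = 3                                                # input dimension
xs = rng.standard_normal((5, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)      # 5 points on the (n-1)-sphere

def relu_cov(x, y):
    # E[ReLU(w.x) ReLU(w.y)] for w ~ N(0, I): the degree-1 arc-cosine kernel
    # up to normalization (Cho & Saul, 2009).
    cos_t = np.clip(x @ y, -1.0, 1.0)
    t = np.arccos(cos_t)
    return (np.sin(t) + (np.pi - t) * cos_t) / (2 * np.pi)

K = np.array([[relu_cov(x, y) for y in xs] for x in xs])

for width in (10, 100, 1000):
    outs = []
    for _ in range(2000):
        W = rng.standard_normal((width, n))
        v = rng.standard_normal(width)
        # 1/sqrt(width) scaling keeps the output O(1) and yields a Gaussian limit.
        outs.append(v @ np.maximum(W @ xs.T, 0.0) / np.sqrt(width))
    emp_cov = np.cov(np.array(outs).T)
    print(f"width={width:5d}  max |emp. cov - K| = {np.abs(emp_cov - K).max():.3f}")
```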

Non-asymptotic approximations of Gaussian neural networks via second-order Poincaré inequalities

A Bordino, S Favaro, S Fortini - Proceedings of Machine …, 2024 - iris.unibocconi.it
There is a recent and growing literature on large-width asymptotic and non-asymptotic
properties of deep Gaussian neural networks (NNs), namely NNs with weights initialized as …

Wide stable neural networks: Sample regularity, functional convergence and Bayesian inverse problems

T Soto - arXiv preprint arXiv:2407.03909, 2024 - arxiv.org
We study the large-width asymptotics of random fully connected neural networks with
weights drawn from $\alpha$-stable distributions, a family of heavy-tailed distributions …
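As a hedged illustration of this setting (not the paper's results), the sketch below samples symmetric $\alpha$-stable weights with SciPy and uses the width scaling $n^{-1/\alpha}$ that keeps a one-hidden-layer network's output of order one; the choice of $\alpha$, width and activation is arbitrary.

```python
# Illustration only (not the paper's setup): one-hidden-layer network with
# symmetric alpha-stable weights. Heavy tails call for the scaling
# width**(-1/alpha) instead of width**(-1/2); alpha, width and the bounded
# activation below are arbitrary illustrative choices.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
alpha, d, width = 1.5, 4, 5000
x = rng.standard_normal(d)

W = levy_stable.rvs(alpha, beta=0.0, size=(width, d), random_state=rng)
v = levy_stable.rvs(alpha, beta=0.0, size=width, random_state=rng)

h = np.tanh(W @ x)                        # bounded activation keeps sums finite
f = (v @ h) / width ** (1.0 / alpha)      # stable scaling instead of 1/sqrt(width)
print(f"output under alpha-stable initialization: {f:.3f}")
```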

Implicit compressibility of overparametrized neural networks trained with heavy-tailed SGD

Y Wan, M Barsbey, A Zaidi, U Simsekli - arXiv preprint arXiv:2306.08125, 2023 - arxiv.org
Neural network compression has been an increasingly important subject, not only due to its
practical relevance, but also due to its theoretical implications, as there is an explicit …

Posterior and variational inference for deep neural networks with heavy-tailed weights

I Castillo, P Egels - arXiv preprint arXiv:2406.03369, 2024 - arxiv.org
We consider deep neural networks in a Bayesian framework with a prior distribution
sampling the network weights at random. Following a recent idea of Agapiou and Castillo …

Deep Kernel Posterior Learning under Infinite Variance Prior Weights

J Loría, A Bhadra - arXiv preprint arXiv:2410.01284, 2024 - arxiv.org
Neal (1996) proved that infinitely wide shallow Bayesian neural networks (BNNs) converge to
Gaussian processes (GPs) when the network weights have bounded prior variance. Cho & …
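A small Monte Carlo sketch of the premise cited here, with widths, priors and tail statistic chosen purely for illustration (not the paper's model): under a bounded-variance Gaussian prior the wide-network output is essentially Gaussian, whereas an infinite-variance (Cauchy) prior leaves visible heavy tails.

```python
# Illustrative contrast (not the paper's model): output of a wide shallow
# network at one input, under a Gaussian weight prior (bounded variance,
# Neal's 1/sqrt(width) scaling) vs. a Cauchy prior (infinite variance,
# 1/width stable scaling). The tail statistic is a simple robust diagnostic.
import numpy as np

rng = np.random.default_rng(2)
d, width, reps = 3, 500, 5000
x = rng.standard_normal(d)

def outputs(sampler, scale):
    W = sampler(size=(reps, width, d))
    v = sampler(size=(reps, width))
    h = np.tanh(np.einsum("rwd,d->rw", W, x))
    return np.einsum("rw,rw->r", v, h) / scale

def tail_frac(f, k=10.0):
    # Fraction of draws more than k robust scales (MAD) from the median:
    # ~0 for a Gaussian law, clearly positive for a Cauchy-like law.
    dev = np.abs(f - np.median(f))
    return np.mean(dev > k * np.median(dev))

gauss = outputs(rng.standard_normal, np.sqrt(width))
cauchy = outputs(rng.standard_cauchy, width)
print(f"Gaussian prior: tail fraction = {tail_frac(gauss):.4f}")
print(f"Cauchy prior:   tail fraction = {tail_frac(cauchy):.4f}")
```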

Deep neural networks with dependent weights: Gaussian process mixture limit, heavy tails, sparsity and compressibility

H Lee, F Ayed, P Jung, J Lee, H Yang… - Journal of Machine …, 2023 - jmlr.org
This article studies the infinite-width limit of deep feedforward neural networks whose
weights are dependent, and modelled via a mixture of Gaussian distributions. Each hidden …
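The sketch below illustrates the general mechanism described here, a Gaussian scale mixture with one random scale shared by all incoming weights of a hidden unit, rather than the paper's exact model: such weights are uncorrelated yet dependent within a unit, and marginally heavy-tailed (Student-t when the shared variance is inverse-gamma). The shape parameter and sizes are arbitrary.

```python
# Illustration of dependent, scale-mixture weights (not the paper's exact model):
# each hidden unit draws one variance from an inverse-gamma distribution and all
# of its incoming weights share it. Rows are then uncorrelated but dependent,
# and marginally Student-t (heavy-tailed). Shape parameter and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
width, d, a = 20000, 50, 3.0                                  # a: inverse-gamma shape

sigma2 = 1.0 / rng.gamma(shape=a, scale=1.0, size=(width, 1)) # one variance per unit
W = rng.standard_normal((width, d)) * np.sqrt(sigma2)         # shared scale per row

w1, w2 = W[:, 0], W[:, 1]
print("corr(w1, w2)       ", np.corrcoef(w1, w2)[0, 1])       # ~0: uncorrelated
print("corr(w1^2, w2^2)   ", np.corrcoef(w1**2, w2**2)[0, 1]) # >0: dependent via scale
print("excess kurtosis w1 ", np.mean(w1**4) / np.mean(w1**2)**2 - 3.0)  # >0: heavy tails
```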

Infinitely wide limits for deep Stable neural networks: sub-linear, linear and super-linear activation functions

A Bordino, S Favaro, S Fortini - arXiv preprint arXiv:2304.04008, 2023 - arxiv.org
There is a growing literature on the study of large-width properties of deep Gaussian neural
networks (NNs), i.e. deep NNs with Gaussian-distributed parameters or weights, and …

Non-asymptotic approximations of Gaussian neural networks via second-order Poincaré inequalities

A Bordino, S Favaro, S Fortini - arXiv preprint arXiv:2304.04010, 2023 - arxiv.org
There is a growing interest in large-width asymptotic properties of Gaussian neural
networks (NNs), namely NNs whose weights are initialized according to Gaussian …

Large-width asymptotics for ReLU neural networks with $\alpha$-Stable initializations

S Favaro, S Fortini, S Peluchetti - arXiv preprint arXiv:2206.08065, 2022 - arxiv.org
There is a recent and growing literature on large-width asymptotic properties of Gaussian
neural networks (NNs), namely NNs whose weights are initialized as Gaussian distributions …