Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions

N Halko, PG Martinsson, JA Tropp - SIAM review, 2011 - SIAM
Low-rank matrix approximations, such as the truncated singular value decomposition and
the rank-revealing QR decomposition, play a central role in data analysis and scientific …

[PDF] Compressive sampling

EJ Candès - Proceedings of the international congress of …, 2006 - academia.edu
Conventional wisdom and common practice in acquisition and reconstruction of images from
frequency data follow the basic principle of the Nyquist density sampling theory. This …

Randomized numerical linear algebra: Foundations and algorithms

PG Martinsson, JA Tropp - Acta Numerica, 2020 - cambridge.org
This survey describes probabilistic algorithms for linear algebraic computations, such as
factorizing matrices and solving linear systems. It focuses on techniques that have a proven …

[BOOK] High-dimensional probability: An introduction with applications in data science

R Vershynin - 2018 - books.google.com
High-dimensional probability offers insight into the behavior of random vectors, random
matrices, random subspaces, and objects used to quantify uncertainty in high dimensions …

Statistical learning with sparsity

T Hastie, R Tibshirani… - Monographs on statistics …, 2015 - api.taylorfrancis.com
In this monograph, we have attempted to summarize the actively developing field of
statistical learning with sparsity. A sparse statistical model is one having only a small …

An introduction to matrix concentration inequalities

JA Tropp - Foundations and Trends® in Machine Learning, 2015 - nowpublishers.com
Random matrices now play a role in many areas of theoretical, applied, and computational
mathematics. Therefore, it is desirable to have tools for studying random matrices that are …

[BOOK] An invitation to compressive sensing

S Foucart, H Rauhut - 2013 - Springer
This first chapter formulates the objectives of compressive sensing. It introduces the
standard compressive problem studied throughout the book and reveals its ubiquity in many …

On provable benefits of depth in training graph convolutional networks

W Cong, M Ramezani… - Advances in Neural …, 2021 - proceedings.neurips.cc
Graph Convolutional Networks (GCNs) are known to suffer from performance
degradation as the number of layers increases, which is usually attributed to over …

Harmless interpolation of noisy data in regression

V Muthukumar, K Vodrahalli… - IEEE Journal on …, 2020 - ieeexplore.ieee.org
A continuing mystery in understanding the empirical success of deep neural networks is
their ability to achieve zero training error and generalize well, even when the training data is …

Confidence intervals for low dimensional parameters in high dimensional linear models

CH Zhang, SS Zhang - Journal of the Royal Statistical Society …, 2014 - academic.oup.com
The purpose of this paper is to propose methodologies for statistical inference of low
dimensional parameters with high dimensional data. We focus on constructing confidence …