Few-shot data-driven algorithms for low rank approximation

P Indyk, T Wagner, D Woodruff - Advances in Neural …, 2021 - proceedings.neurips.cc
Recently, data-driven and learning-based algorithms for low rank matrix approximation were
shown to outperform classical data-oblivious algorithms by wide margins in terms of …

Projection-cost-preserving sketches: Proof strategies and constructions

C Musco, C Musco - arXiv preprint arXiv:2004.08434, 2020 - arxiv.org
In this note we illustrate how common matrix approximation methods, such as random
projection and random sampling, yield projection-cost-preserving sketches, as introduced in …

Fast and Low-Memory Compressive Sensing Algorithms for Low Tucker-Rank Tensor Approximation from Streamed Measurements

C Haselby, MA Iwen, D Needell, E Rebrova… - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper we consider the problem of recovering a low-rank Tucker approximation to a
massive tensor based solely on structured random compressive measurements. Crucially …

[BOOK][B] On Methods in Tensor Recovery and Completion

C Haselby - 2023 - search.proquest.com
Tensor representations of data have great promise, since as the size of data grows both in
terms of dimensionality and modes, it becomes increasingly advantageous to employ …

Sublinear Algorithms for Matrices: Theory and Applications

A Ray - 2024 - scholarworks.umass.edu
Matrices are ubiquitous mathematical structures that arise throughout computer science. We
study fast algorithms for several central problems involving matrices, including eigenvalue …

Faster Matrix Algorithms Via Randomized Sketching & Preconditioning

A Chowdhury - 2021 - search.proquest.com
Recently, in statistics and machine learning, the notion of Randomization in Numerical
Linear Algebra (RandNLA) has not only evolved as a vital new tool to design fast and …

[PDF] Few-Shot Data-Driven Algorithms for Low Rank Approximation: Supplementary Material

P Indyk, T Wagner, DP Woodruff - proceedings.neurips.cc
Let us restate the theorem. Recapping notation, let A ∈ R^{n×d} be an input matrix. Let a_1, …, a_n denote its rows. Let k be the target low rank, and let ε > 0 be an error parameter. Let S_0 ∈ …