Embrace rejection: Kernel matrix approximation by accelerated randomly pivoted Cholesky

EN Epperly, JA Tropp, RJ Webber - arXiv preprint arXiv:2410.03969, 2024 - arxiv.org
Randomly pivoted Cholesky (RPCholesky) is an algorithm for constructing a low-rank
approximation of a positive-semidefinite matrix using a small number of columns. This paper …

Adaptive batch sizes for active learning: A probabilistic numerics approach

M Adachi, S Hayakawa, M Jørgensen… - International …, 2024 - proceedings.mlr.press
Active learning parallelization is widely used, but typically relies on fixing the batch size
throughout experimentation. This fixed approach is inefficient because of a dynamic trade-off …

Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction

M Fornace, M Lindsey - arXiv preprint arXiv:2407.01698, 2024 - arxiv.org
Column selection is an essential tool for structure-preserving low-rank approximation, with
wide-ranging applications across many fields, such as data science, machine learning, and …

Debiased Distribution Compression

L Li, R Dwivedi, L Mackey - arXiv preprint arXiv:2404.12290, 2024 - arxiv.org
Modern compression methods can summarize a target distribution $\mathbb{P}$ more
succinctly than iid sampling but require access to a low-bias input sequence like a Markov …

Policy Gradient with Kernel Quadrature

S Hayakawa, T Morimura - arXiv preprint arXiv:2310.14768, 2023 - arxiv.org
Reward evaluation of episodes becomes a bottleneck in a broad range of reinforcement
learning tasks. Our aim in this paper is to select a small but representative subset of a large …

Random convex hulls and kernel quadrature

S Hayakawa - 2023 - ora.ox.ac.uk
Discretization of probability measures is ubiquitous in the field of applied mathematics, from
classical numerical integration to data compression and algorithmic acceleration in machine …