Learning deep kernels for non-parametric two-sample tests

F Liu, W Xu, J Lu, G Zhang, A Gretton… - International …, 2020 - proceedings.mlr.press
We propose a class of kernel-based two-sample tests, which aim to determine whether two
sets of samples are drawn from the same distribution. Our tests are constructed from kernels …
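
As background for the idea, a plain kernel MMD permutation test looks like the sketch below; the Gaussian kernel and median-heuristic bandwidth are generic defaults assumed here, not the learned deep kernel the paper proposes.

```python
# A minimal sketch of a kernel two-sample test, NOT the paper's learned deep
# kernel: a fixed Gaussian kernel with a median-heuristic bandwidth, with the
# MMD statistic calibrated by a permutation test.
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2_biased(X, Y, bandwidth):
    # Biased V-statistic estimate of MMD^2(P, Q).
    return (gaussian_kernel(X, X, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean()
            - 2 * gaussian_kernel(X, Y, bandwidth).mean())

def mmd_permutation_test(X, Y, n_perms=500, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    # Median heuristic for the bandwidth (a common default, an assumption here).
    sq = np.maximum(np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * Z @ Z.T, 0)
    d = np.sqrt(sq)
    bw = np.median(d[d > 0])
    stat = mmd2_biased(X, Y, bw)
    null = []
    for _ in range(n_perms):
        idx = rng.permutation(len(Z))
        null.append(mmd2_biased(Z[idx[:len(X)]], Z[idx[len(X):]], bw))
    return (1 + np.sum(np.array(null) >= stat)) / (1 + n_perms)  # p-value
```

Rejecting when the returned p-value falls below the target level gives the standard fixed-kernel test; the paper's contribution is to learn the kernel rather than fix it.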

Convergence of flow-based generative models via proximal gradient descent in Wasserstein space

X Cheng, J Lu, Y Tan, Y Xie - arXiv preprint arXiv:2310.17582, 2023 - arxiv.org
Flow-based generative models enjoy certain advantages in data generation and likelihood
computation, and have recently shown competitive empirical performance. Compared …
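
The likelihood advantage the abstract alludes to comes from the change-of-variables formula for invertible maps; the toy elementwise affine flow below is an assumed stand-in, not the proximal-gradient construction analyzed in the paper.

```python
# Generic change-of-variables log-likelihood for an invertible flow, shown on
# a toy elementwise affine map.
import numpy as np

def affine_flow_logpdf(x, scale, shift):
    # z = (x - shift) / scale maps data back to a standard normal base;
    # log p(x) = log N(z; 0, I) + log |det dz/dx|, where for this
    # elementwise map log |det dz/dx| = -sum(log |scale|).
    z = (x - shift) / scale
    log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
    log_det = -np.sum(np.log(np.abs(scale)))
    return log_base + log_det
```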

Sequential predictive two-sample and independence testing

A Podkopaev, A Ramdas - Advances in neural information …, 2023 - proceedings.neurips.cc
We study the problems of sequential nonparametric two-sample and independence testing.
Sequential tests process data online, using the observations seen so far to decide whether to …
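
Sequential tests of this kind are often built on betting/e-process machinery: a wealth process is multiplied by a payoff with nonpositive expectation under the null, and the test rejects once wealth crosses 1/α, which is anytime-valid by Ville's inequality. The payoff sequence and the fixed bet size `lam` below are placeholders, not the paper's betting strategy.

```python
# Skeletal sequential test by betting: wealth multiplies by (1 + lam * g_t)
# for payoffs g_t with E[g_t] <= 0 under the null and g_t >= -1; reject as
# soon as wealth >= 1/alpha.
def sequential_test(payoffs, alpha=0.05, lam=0.5):
    wealth = 1.0
    for t, g in enumerate(payoffs, start=1):
        wealth *= 1.0 + lam * g          # g in [-1, 1] keeps wealth nonnegative
        if wealth >= 1.0 / alpha:
            return "reject", t           # stop once the evidence suffices
    return "fail to reject", len(payoffs)
```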

Neural tangent kernel maximum mean discrepancy

X Cheng, Y Xie - Advances in Neural Information …, 2021 - proceedings.neurips.cc
We present a novel neural network Maximum Mean Discrepancy (MMD) statistic by
identifying a new connection between the neural tangent kernel (NTK) and MMD. This …
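
One way to read the NTK-MMD connection: at initialization, the empirical tangent kernel is the inner product of parameter gradients, k(x, y) = ⟨∇θf(x), ∇θf(y)⟩, and MMD with any finite-dimensional feature kernel reduces to a distance between mean feature embeddings. The tiny one-hidden-layer ReLU network below is an assumed illustration, not the statistic from the paper.

```python
import numpy as np

def ntk_features(X, W, a):
    # Gradient features of f(x) = a @ relu(W x) / sqrt(m) w.r.t. (W, a).
    m = W.shape[0]
    pre = X @ W.T                          # (n, m) pre-activations
    act = np.maximum(pre, 0.0)             # relu
    dact = (pre > 0).astype(float)         # relu'
    feat_a = act / np.sqrt(m)              # df/da_j
    # df/dw_j = a_j * relu'(w_j . x) * x / sqrt(m): shape (n, m, d)
    feat_W = (a[None, :] * dact)[:, :, None] * X[:, None, :] / np.sqrt(m)
    return np.concatenate([feat_a, feat_W.reshape(len(X), -1)], axis=1)

def ntk_mmd2(X, Y, d_hidden=256, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((d_hidden, X.shape[1]))
    a = rng.standard_normal(d_hidden)
    phi_x = ntk_features(X, W, a).mean(axis=0)   # mean embedding under P
    phi_y = ntk_features(Y, W, a).mean(axis=0)   # mean embedding under Q
    return np.sum((phi_x - phi_y) ** 2)          # MMD^2 with k = <grad, grad>
```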

A deep network construction that adapts to intrinsic dimensionality beyond the domain

A Cloninger, T Klock - Neural Networks, 2021 - Elsevier
We study the approximation of two-layer compositions f(x) = g(ϕ(x)) via deep networks with
ReLU activation, where ϕ is a geometrically intuitive, dimensionality-reducing feature map …

Comparing distributions by measuring differences that affect decision making

S Zhao, A Sinha, Y He, A Perreault, J Song… - … Conference on Learning …, 2022 - par.nsf.gov
Measuring the discrepancy between two probability distributions is a fundamental problem
in machine learning and statistics. We propose a new class of discrepancies based on the …

AutoML two-sample test

JM Kübler, V Stimper, S Buchholz… - Advances in …, 2022 - proceedings.neurips.cc
Two-sample tests are important in statistics and machine learning, both as tools for scientific
discovery and as a way to detect distribution shifts. This has led to the development of many …
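
For reference, the non-automated baseline such work builds on is the classifier two-sample test (C2ST): train any classifier to tell the two samples apart and test held-out accuracy against chance. The fixed logistic-regression model below is an assumption standing in for the paper's automated model selection.

```python
# Plain classifier two-sample test: label samples from P as 0 and from Q as 1,
# fit an off-the-shelf classifier, and test held-out accuracy against 1/2.
import numpy as np
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def c2st(X, Y, alpha=0.05, seed=0):
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    Z_tr, Z_te, y_tr, y_te = train_test_split(
        Z, labels, test_size=0.5, random_state=seed, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    correct = int((clf.predict(Z_te) == y_te).sum())
    # Under H0 (same distribution) held-out accuracy is ~ Binomial(n, 1/2).
    p = binomtest(correct, len(y_te), p=0.5, alternative="greater").pvalue
    return p < alpha, p
```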

R-divergence for estimating model-oriented distribution discrepancy

Z Zhao, L Cao - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
Real-life data are often non-IID due to complex distributions and interactions, and learning
models can differ in their sensitivity to the distribution of samples. Accordingly, a …

E-valuating classifier two-sample tests

T Pandeva, T Bakker, CA Naesseth, P Forré - arXiv preprint arXiv …, 2022 - arxiv.org
We propose E-C2ST, a classifier two-sample test for high-dimensional data based on E-values.
Compared to p-value-based tests, tests with E-values have finite-sample …
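
The E-value mechanics behind such tests are simple to state: an E-value is a nonnegative statistic with expectation at most 1 under the null, so Markov's inequality gives P(E ≥ 1/α) ≤ α, and E-values from independent batches combine by multiplication. The sketch below shows only this generic aggregation step, not E-C2ST's construction of the per-batch E-values.

```python
# Generic E-value mechanics, not E-C2ST itself: products of E-values from
# independent batches remain E-values, and rejecting when the product reaches
# 1/alpha controls type I error by Markov's inequality.
import numpy as np

def combine_e_values(e_values):
    # Product combination across independent batches.
    return float(np.prod(e_values))

def e_test(e_values, alpha=0.05):
    e = combine_e_values(e_values)
    return e >= 1.0 / alpha, e   # reject H0 iff combined E-value >= 1/alpha
```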