Recently, optimal transport has been proposed as a probabilistic framework in machine learning for comparing and manipulating probability distributions. This is rooted in its rich …
Optimal transport distances have found many applications in machine learning for their capacity to compare non-parametric probability distributions. Yet their algorithmic complexity …
Optimal transport (OT) theory describes general principles to define and select, among many possible choices, the most efficient way to map a probability measure onto another. That …
D Alvarez-Melis, N Fusi - Advances in Neural Information …, 2020 - proceedings.neurips.cc
The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning. Current methods to quantify it are often heuristic …
The squared Wasserstein distance is a natural quantity to compare probability distributions in a non-parametric setting. This quantity is usually estimated with the plug-in estimator …
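The plug-in estimator mentioned in this snippet replaces the true distributions with their empirical counterparts. In one dimension this has a closed form via the quantile coupling: sort both samples and average the squared differences. A minimal sketch (the Gaussian inputs and sample size are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)  # samples from mu = N(0, 1)
y = rng.normal(1.0, 1.0, size=1000)  # samples from nu = N(1, 1)

# In 1D, the squared Wasserstein-2 distance between the empirical
# measures couples the i-th order statistics, so the plug-in
# estimator is the mean squared difference of the sorted samples.
w2_sq = np.mean((np.sort(x) - np.sort(y)) ** 2)
```

For these two Gaussians the population value is the squared mean shift, 1.0, so the estimate should land near that; in higher dimensions no such sorting shortcut exists, which is where the statistical and computational questions raised above arise.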
Generative adversarial networks (GANs) have been extremely successful in generating samples from seemingly high-dimensional probability measures. However, these methods …
G Mena, J Niles-Weed - Advances in neural information …, 2019 - proceedings.neurips.cc
We prove several fundamental statistical bounds for entropic OT with the squared Euclidean cost between subgaussian probability measures in arbitrary dimension. First, through a new …
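Entropic OT, the object of these statistical bounds, is typically computed with Sinkhorn's matrix-scaling iterations. A minimal sketch on discrete measures (the point clouds, regularization level, and iteration count are illustrative assumptions; the squared Euclidean cost matches the setting in the snippet):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropic OT via Sinkhorn iterations (minimal sketch).

    a, b: probability vectors; C: cost matrix; eps: entropic
    regularization strength. Returns the transport plan P and
    the transport cost <P, C>.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # rescale columns toward marginal b
        u = a / (K @ v)           # rescale rows toward marginal a
    P = u[:, None] * K * v[None, :]
    return P, np.sum(P * C)

# squared Euclidean cost between two small 1D point clouds
x = np.linspace(0.0, 1.0, 5)[:, None]
y = np.linspace(0.5, 1.5, 5)[:, None]
C = (x - y.T) ** 2
a = b = np.full(5, 0.2)           # uniform weights
P, cost = sinkhorn(a, b, C)
```

The iterations alternately match the row and column marginals of the plan; the regularization eps trades approximation quality for conditioning and speed, which is why sample-complexity bounds for entropic OT are stated in terms of it.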
We develop a computationally tractable method for estimating the optimal map between two distributions over $\mathbb {R}^ d $ with rigorous finite-sample guarantees. Leveraging an …
T Uscidda, M Cuturi - International Conference on Machine …, 2023 - proceedings.mlr.press
Optimal transport (OT) theory has been used in machine learning to study and characterize maps that can efficiently push forward one probability measure onto another. Recent works …