Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions

S Chen, S Chewi, J Li, Y Li, A Salim… - arXiv preprint arXiv …, 2022 - arxiv.org
We provide theoretical convergence guarantees for score-based generative models (SGMs)
such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of …
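
For orientation, the DDPM sampler these guarantees cover runs the standard reverse update of Ho et al. (2020); as a brief recap (the usual notation, not taken from this abstract), with noise schedule $\beta_t$, $\alpha_t = 1-\beta_t$, $\bar\alpha_t = \prod_{s\le t}\alpha_s$, and learned noise predictor $\epsilon_\theta$:

$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar\alpha_t}}\,\epsilon_\theta(x_t, t)\right) + \sigma_t z, \qquad z \sim \mathcal{N}(0, I).$$

Here $\epsilon_\theta(x_t, t)$ is, up to a factor of $-\sqrt{1-\bar\alpha_t}$, an estimate of the score $\nabla \log p_t(x_t)$, which is the sense in which "sampling is as easy as learning the score."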

The probability flow ODE is provably fast

S Chen, S Chewi, H Lee, Y Li, J Lu… - Advances in Neural …, 2024 - proceedings.neurips.cc
We provide the first polynomial-time convergence guarantees for the probability flow ODE
implementation (together with a corrector step) of score-based generative modeling. Our …
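
As a toy illustration of the probability flow ODE itself (a minimal sketch, not the corrector-augmented scheme the paper analyzes), the following Python snippet integrates the ODE backward in time for an Ornstein-Uhlenbeck forward process whose score is known in closed form; the mean `mu`, horizon, and step count are illustrative choices:

```python
import numpy as np

# Toy probability flow ODE (illustrative; not the paper's corrector scheme).
# Forward process: dX_t = -X_t dt + sqrt(2) dB_t. If the data law is N(mu, I),
# then p_t = N(exp(-t) * mu, I) and grad log p_t(x) = -(x - exp(-t) * mu).
rng = np.random.default_rng(0)
d, T, n_steps = 2, 5.0, 500
mu = np.array([3.0, -1.0])           # illustrative data mean

def score(x, t):
    return -(x - np.exp(-t) * mu)    # exact score for this toy setup

# Probability flow ODE: dx/dt = -x - grad log p_t(x), run from t = T down to 0.
x = rng.standard_normal((1000, d))   # start from x_T ~ N(0, I), approx. p_T
dt = T / n_steps
for k in range(n_steps):
    t = T - k * dt
    x = x - dt * (-x - score(x, t))  # explicit Euler step, backward in time

print(x.mean(axis=0))                # should be close to mu
```

Because the score is exact here, the only error is the initialization and Euler discretization error, which is the kind of error such convergence analyses control.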

Linear convergence bounds for diffusion models via stochastic localization

J Benton, V De Bortoli, A Doucet… - arXiv preprint arXiv …, 2023 - arxiv.org
Diffusion models are a powerful method for generating approximate samples from high-
dimensional data distributions. Several recent results have provided polynomial bounds on …

Rapid convergence of the unadjusted langevin algorithm: Isoperimetry suffices

S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
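
The ULA iteration in question is a single line; here is a minimal sketch, using a standard Gaussian target ($f(x) = \|x\|^2/2$, an illustrative choice) so the output is easy to check:

```python
import numpy as np

# Unadjusted Langevin Algorithm targeting nu = exp(-f) (up to normalization):
#   x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k,   xi_k ~ N(0, I).
rng = np.random.default_rng(1)

def grad_f(x):
    return x  # gradient of f(x) = ||x||^2 / 2, so nu = N(0, I)

h, n_iters = 0.01, 5000
x = rng.standard_normal(2)
samples = []
for _ in range(n_iters):
    x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(2)
    samples.append(x.copy())

samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))  # ~0 and ~1, up to ULA's O(h) bias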

Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein space

MZ Diao, K Balasubramanian… - … on Machine Learning, 2023 - proceedings.mlr.press
Variational inference (VI) seeks to approximate a target distribution $\pi$ by an element of a
tractable family of distributions. Of key interest in statistics and machine learning is Gaussian …
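
For context, the "JKO in the Bures-Wasserstein space" of the title refers to the proximal-point (JKO) scheme for the KL objective restricted to Gaussians; schematically, with step size $h$ and target $\pi$,

$$p_{k+1} = \operatorname*{arg\,min}_{p \in \mathrm{BW}(\mathbb{R}^d)} \left\{ \mathrm{KL}(p \,\|\, \pi) + \frac{1}{2h}\, W_2^2(p, p_k) \right\},$$

where $\mathrm{BW}(\mathbb{R}^d)$ denotes the Wasserstein space of Gaussian measures. The forward-backward variant splits the objective, taking an explicit gradient step on the potential term and a proximal (JKO) step on the entropy term.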

Faster high-accuracy log-concave sampling via algorithmic warm starts

JM Altschuler, S Chewi - Journal of the ACM, 2024 - dl.acm.org
It is a fundamental problem to understand the complexity of high-accuracy sampling from a
strongly log-concave density $\pi$ on $\mathbb{R}^d$. Indeed, in practice, high-accuracy samplers such as …

Deep networks as denoising algorithms: Sample-efficient learning of diffusion models in high-dimensional graphical models

S Mei, Y Wu - arXiv preprint arXiv:2309.11420, 2023 - arxiv.org
We investigate the approximation efficiency of score functions by deep neural networks in
diffusion-based generative modeling. While existing approximation theories utilize the …

Bourgain's slicing problem and KLS isoperimetry up to polylog

B Klartag, J Lehec - Geometric and functional analysis, 2022 - Springer

Improved discretization analysis for underdamped Langevin Monte Carlo

S Zhang, S Chewi, M Li… - The Thirty Sixth …, 2023 - proceedings.mlr.press
Underdamped Langevin Monte Carlo (ULMC) is an algorithm used to sample from
unnormalized densities by leveraging the momentum of a particle moving in a potential well …
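
For reference, the underlying dynamics are $dX_t = V_t\,dt$, $dV_t = -(\gamma V_t + \nabla f(X_t))\,dt + \sqrt{2\gamma}\,dB_t$; below is a minimal Euler-type sketch (the paper's improved analysis concerns a sharper discretization, not this naive one), with friction $\gamma$, step size, and Gaussian target all illustrative choices:

```python
import numpy as np

# Underdamped Langevin Monte Carlo (naive Euler-type discretization):
#   x <- x + h * v
#   v <- v - h * (gamma * v + grad f(x)) + sqrt(2 * gamma * h) * xi
# Illustrative target: N(0, I), i.e. f(x) = ||x||^2 / 2.
rng = np.random.default_rng(2)

def grad_f(x):
    return x

gamma, h, n_iters = 2.0, 0.01, 5000
x = rng.standard_normal(2)
v = np.zeros(2)
samples = []
for _ in range(n_iters):
    x = x + h * v
    v = v - h * (gamma * v + grad_f(x)) + np.sqrt(2 * gamma * h) * rng.standard_normal(2)
    samples.append(x.copy())

print(np.mean(samples, axis=0))  # ~0; the position marginal approximates N(0, I)
```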

Statistical optimal transport

S Chewi, J Niles-Weed, P Rigollet - arXiv preprint arXiv:2407.18163, 2024 - arxiv.org