Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions

S Chen, S Chewi, J Li, Y Li, A Salim… - arXiv preprint arXiv …, 2022 - arxiv.org
We provide theoretical convergence guarantees for score-based generative models (SGMs)
such as denoising diffusion probabilistic models (DDPMs), which constitute the backbone of …
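
To make the object of these guarantees concrete, here is a minimal sketch (not the paper's algorithm) of how an SGM samples once a score estimate is in hand: Euler-Maruyama discretization of the reverse-time SDE for an Ornstein-Uhlenbeck forward process. The `score` argument is a stand-in for a learned approximation of grad log p_t.

```python
import numpy as np

def reverse_sde_sample(score, d, n_steps=1000, T=1.0, rng=None):
    """Euler-Maruyama discretization of the reverse-time OU SDE
        dx = (x + 2 * score(x, t)) dt + sqrt(2) dB,
    run from t = T down to t = 0.  `score(x, t)` approximates
    grad log p_t(x); it is assumed here, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n_steps
    x = rng.standard_normal(d)          # start from the stationary N(0, I)
    for k in range(n_steps):
        t = T - k * h
        drift = x + 2.0 * score(x, t)   # time reversal of the forward drift -x
        x = x + h * drift + np.sqrt(2.0 * h) * rng.standard_normal(d)
    return x
```

For a Gaussian data distribution N(mu, I), the forward OU process gives p_t = N(mu * e^{-t}, I), so the exact score is `mu * np.exp(-t) - x`, which makes the sketch easy to test.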

Score-based generative modeling with critically-damped Langevin diffusion

T Dockhorn, A Vahdat, K Kreis - arXiv preprint arXiv:2112.07068, 2021 - arxiv.org
Score-based generative models (SGMs) have demonstrated remarkable synthesis quality.
SGMs rely on a diffusion process that gradually perturbs the data towards a tractable …
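
A hedged sketch of the forward perturbation the abstract refers to: a coupled position-velocity Langevin diffusion, simulated by Euler-Maruyama, with unit mass so the critical friction is gamma = 2. The constants and schedule are illustrative, not the paper's parameterization.

```python
import numpy as np

def cld_forward(x0, n_steps=1000, T=1.0, gamma=2.0, rng=None):
    """Simulate a critically damped Langevin diffusion (unit mass M = 1,
    friction gamma = 2 so that gamma**2 = 4 * M):
        dx = v dt,   dv = (-x - gamma * v) dt + sqrt(2 * gamma) dB.
    Noise enters only through the velocity channel; the x-marginal is
    perturbed smoothly toward the tractable Gaussian prior."""
    rng = np.random.default_rng() if rng is None else rng
    h = T / n_steps
    x = np.asarray(x0, dtype=float)
    v = rng.standard_normal(x.shape)    # velocities drawn from N(0, I)
    for _ in range(n_steps):
        x = x + h * v
        v = v - h * (x + gamma * v) + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape)
    return x, v
```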

Variational inference via Wasserstein gradient flows

M Lambert, S Chewi, F Bach… - Advances in Neural …, 2022 - proceedings.neurips.cc
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI)
has emerged as a central computational approach to large-scale Bayesian inference …
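
As an illustration of VI over Wasserstein geometry, here is one plausible Gaussian-family instantiation: gradient descent on the mean along the averaged gradient of the potential, and a symmetric multiplicative update on the covariance, with Monte Carlo estimates of grad V and hess V. This is a simplified reading of the approach, not the paper's exact scheme.

```python
import numpy as np

def bw_gaussian_vi(grad_V, hess_V, d, h=0.01, n_iters=500, n_mc=8, rng=None):
    """Illustrative Bures-Wasserstein gradient-descent sketch for Gaussian
    variational inference against a target pi ∝ exp(-V).  All expectations
    under the current N(m, S) are replaced by small Monte Carlo averages."""
    rng = np.random.default_rng() if rng is None else rng
    m, S = np.zeros(d), np.eye(d)
    for _ in range(n_iters):
        L = np.linalg.cholesky(S)
        xs = m + rng.standard_normal((n_mc, d)) @ L.T   # samples from N(m, S)
        g = np.mean([grad_V(x) for x in xs], axis=0)
        H = np.mean([hess_V(x) for x in xs], axis=0)
        M = np.eye(d) - h * (H - np.linalg.inv(S))
        m = m - h * g
        S = M @ S @ M        # symmetric update; stays PSD for small enough h
    return m, S
```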

Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices

S Vempala, A Wibisono - Advances in neural information …, 2019 - proceedings.neurips.cc
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability
distribution $\nu = e^{-f}$ on $\mathbb{R}^n$. We prove a convergence guarantee in Kullback …
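
The ULA iteration under study is one line; a minimal sketch for a target $\nu \propto e^{-f}$, with the step size h assumed to be tuned to the smoothness of f:

```python
import numpy as np

def ula(grad_f, x0, h=1e-2, n_steps=10_000, rng=None):
    """Unadjusted Langevin Algorithm for nu ∝ exp(-f):
        x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * xi_k,  xi_k ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    return x
```

With `grad_f = lambda x: x` this chain targets (up to discretization bias) the standard Gaussian.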

Towards a theory of non-log-concave sampling: first-order stationarity guarantees for Langevin Monte Carlo

K Balasubramanian, S Chewi… - … on Learning Theory, 2022 - proceedings.mlr.press
For the task of sampling from a density $\pi \propto \exp(-V)$ on $\mathbb{R}^d$, where $V$ is
possibly non-convex but $L$-gradient Lipschitz, we prove that averaged Langevin Monte …
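
Reading "averaged" as returning a uniformly random iterate, so that the output law is the average of the iterates' laws (the object for which first-order, Fisher-information guarantees are naturally stated), a sketch might look like:

```python
import numpy as np

def averaged_lmc(grad_V, x0, h=1e-2, n_steps=10_000, rng=None):
    """Langevin Monte Carlo with averaging: run the chain and return an
    iterate chosen uniformly at random.  A sketch under that reading of
    'averaged'; not the paper's exact algorithm statement."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_steps):
        x = x - h * grad_V(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        iterates.append(x.copy())
    return iterates[rng.integers(n_steps)]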

Faster high-accuracy log-concave sampling via algorithmic warm starts

JM Altschuler, S Chewi - Journal of the ACM, 2024 - dl.acm.org
It is a fundamental problem to understand the complexity of high-accuracy sampling from a
strongly log-concave density π on ℝ^d. Indeed, in practice, high-accuracy samplers such as …
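
The snippet breaks off before naming the samplers; the Metropolis-adjusted Langevin algorithm (MALA) is the canonical high-accuracy example. A minimal sketch shows where a warm start matters: from a cold start the Metropolis filter can reject almost every proposal.

```python
import numpy as np

def mala(f, grad_f, x0, h=1e-2, n_steps=10_000, rng=None):
    """Metropolis-adjusted Langevin algorithm for pi ∝ exp(-f): a Langevin
    proposal corrected by a Metropolis accept-reject filter."""
    rng = np.random.default_rng() if rng is None else rng
    # log density of the proposal y' given the current point x'
    log_q = lambda to, frm: -np.sum((to - frm + h * grad_f(frm)) ** 2) / (4.0 * h)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        y = x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
        log_acc = (f(x) - f(y)) + (log_q(x, y) - log_q(y, x))
        if np.log(rng.uniform()) < log_acc:
            x = y
    return x
```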

Sampling can be faster than optimization

YA Ma, Y Chen, C Jin… - Proceedings of the …, 2019 - National Acad Sciences
Optimization algorithms and Monte Carlo sampling algorithms have provided the
computational foundations for the rapid growth in applications of statistical machine learning …

Improved analysis for a proximal algorithm for sampling

Y Chen, S Chewi, A Salim… - Conference on Learning …, 2022 - proceedings.mlr.press
We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence
guarantees under weaker assumptions than strong log-concavity: namely, our results hold …
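
For intuition about the alternation, here is a sketch of the proximal sampler specialized to a Gaussian target, where the restricted Gaussian oracle (RGO) is available in closed form; for general targets the RGO must be implemented approximately, and this sketch only illustrates the two-step structure.

```python
import numpy as np

def proximal_sampler_gaussian(Sigma, eta=0.5, n_steps=200, rng=None):
    """Proximal sampler (Gibbs sampling on pi(x) * N(y; x, eta*I)) for the
    Gaussian target pi = N(0, Sigma).  The backward (RGO) conditional is
        x | y ~ N(A @ y / eta, A),  A = (Sigma^{-1} + I/eta)^{-1}."""
    rng = np.random.default_rng() if rng is None else rng
    d = Sigma.shape[0]
    A = np.linalg.inv(np.linalg.inv(Sigma) + np.eye(d) / eta)
    L = np.linalg.cholesky(A)
    x = np.zeros(d)
    for _ in range(n_steps):
        y = x + np.sqrt(eta) * rng.standard_normal(d)   # forward: y | x ~ N(x, eta*I)
        x = A @ y / eta + L @ rng.standard_normal(d)    # backward: exact RGO draw
    return x
```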

The randomized midpoint method for log-concave sampling

R Shen, YT Lee - Advances in Neural Information …, 2019 - proceedings.neurips.cc
Sampling from log-concave distributions is a well-researched problem that has many
applications in statistics and machine learning. We study the distributions of the form …
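
A hedged sketch of the randomized midpoint idea, written here for overdamped Langevin for brevity (the paper's algorithm is the underdamped analogue): evaluate the gradient at a uniformly random time inside each step, with the Brownian increments at the midpoint and the endpoint coupled consistently.

```python
import numpy as np

def randomized_midpoint_langevin(grad_f, x0, h=1e-2, n_steps=5_000, rng=None):
    """Randomized midpoint discretization of dx = -grad f(x) dt + sqrt(2) dB.
    Each step draws alpha ~ U[0, 1], estimates the state at time alpha*h by
    Euler, and uses the gradient there for the full step."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        a = rng.uniform()
        w_mid = np.sqrt(a * h) * rng.standard_normal(x.shape)               # B(a*h)
        w_end = w_mid + np.sqrt((1 - a) * h) * rng.standard_normal(x.shape) # B(h), coupled
        x_mid = x - a * h * grad_f(x) + np.sqrt(2.0) * w_mid
        x = x - h * grad_f(x_mid) + np.sqrt(2.0) * w_end
    return x
```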

Improved discretization analysis for underdamped Langevin Monte Carlo

S Zhang, S Chewi, M Li… - The Thirty Sixth …, 2023 - proceedings.mlr.press
Underdamped Langevin Monte Carlo (ULMC) is an algorithm used to sample from
unnormalized densities by leveraging the momentum of a particle moving in a potential well …
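
For orientation, the dynamics being discretized are written out below. This plain Euler-Maruyama sketch conveys the structure only; the ULMC scheme analyzed in the paper integrates the linear part of each step exactly.

```python
import numpy as np

def ulmc_euler(grad_f, x0, gamma=2.0, h=1e-2, n_steps=10_000, rng=None):
    """Euler-Maruyama sketch of the underdamped Langevin dynamics
        dx = v dt,   dv = -(gamma * v + grad f(x)) dt + sqrt(2 * gamma) dB,
    whose x-marginal at stationarity is pi ∝ exp(-f)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        x = x + h * v
        v = v - h * (gamma * v + grad_f(x)) + np.sqrt(2.0 * gamma * h) * rng.standard_normal(x.shape)
    return x
```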