Improved analysis for a proximal algorithm for sampling

Y Chen, S Chewi, A Salim… - Conference on Learning …, 2022 - proceedings.mlr.press
We study the proximal sampler of Lee, Shen, and Tian (2021) and obtain new convergence
guarantees under weaker assumptions than strong log-concavity: namely, our results hold …
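
For context, the proximal sampler alternates two conditional draws on the augmented target $\pi(x, y) \propto \exp(-f(x) - \|x - y\|^2/(2\eta))$: a forward Gaussian step $y \mid x$ and a "restricted Gaussian oracle" (RGO) step $x \mid y$. A minimal sketch, assuming a Gaussian target $f(x) = x^\top A x / 2$ so the RGO is available in closed form; the function names and the value of $\eta$ are illustrative:

```python
import numpy as np

def proximal_sampler_gaussian(A, eta, x0, n_iters, rng):
    """Proximal sampler for pi(x) ~ exp(-x^T A x / 2), where the
    restricted Gaussian oracle x | y ~ exp(-f(x) - |x-y|^2/(2*eta))
    is itself Gaussian and hence exact."""
    d = x0.shape[0]
    cov = np.linalg.inv(A + np.eye(d) / eta)   # covariance of x | y
    L = np.linalg.cholesky(cov)
    x = x0.copy()
    for _ in range(n_iters):
        y = x + np.sqrt(eta) * rng.standard_normal(d)   # y | x ~ N(x, eta I)
        mean = cov @ (y / eta)                          # x | y ~ N(mean, cov)
        x = mean + L @ rng.standard_normal(d)
    return x

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])
sample = proximal_sampler_gaussian(A, eta=0.5, x0=np.zeros(2), n_iters=200, rng=rng)
```

For non-Gaussian $f$ the RGO must be implemented approximately, e.g. by rejection sampling, and the assumptions of the convergence analysis enter through that inner step.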

Efficient constrained sampling via the mirror-Langevin algorithm

K Ahn, S Chewi - Advances in Neural Information …, 2021 - proceedings.neurips.cc
We propose a new discretization of the mirror-Langevin diffusion and give a crisp proof of its
convergence. Our analysis uses relative convexity/smoothness and self-concordance, ideas …
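
As a point of reference, a mirror-Langevin iteration pushes the iterate through a mirror map $\nabla\phi$, takes a Langevin step in the dual space with noise preconditioned by $(\nabla^2\phi)^{1/2}$, and maps back. Below is a naive Euler-type sketch on the positive orthant with the entropic mirror map $\phi(x) = \sum_i (x_i \log x_i - x_i)$; this is not the paper's exact discretization, and the choices of mirror map and target are assumptions for illustration:

```python
import numpy as np

def mirror_langevin(grad_f, x0, h, n_iters, rng):
    """Euler-type discretization of the mirror-Langevin diffusion on the
    positive orthant with the entropic mirror map phi(x) = sum(x log x - x),
    so grad phi(x) = log(x), its inverse is exp(z),
    and Hess phi(x) = diag(1/x)."""
    x = x0.copy()
    for _ in range(n_iters):
        noise = rng.standard_normal(x.shape)
        # dual-space step; (Hess phi)^{1/2} = diag(1/sqrt(x))
        z = np.log(x) - h * grad_f(x) + np.sqrt(2 * h) * noise / np.sqrt(x)
        x = np.exp(z)          # map back through the inverse mirror map
    return x

rng = np.random.default_rng(0)
x = mirror_langevin(lambda x: np.ones_like(x), x0=np.ones(3), h=0.01,
                    n_iters=5000, rng=rng)   # target: Exp(1)^3 on x > 0
```

Because the iterate is recovered through $\exp$, it stays in the constraint set by construction, which is the appeal of mirror-Langevin for constrained sampling.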

Improved dimension dependence of a proximal algorithm for sampling

J Fan, B Yuan, Y Chen - The Thirty Sixth Annual Conference …, 2023 - proceedings.mlr.press
We propose a sampling algorithm that achieves superior complexity bounds in all the
classical settings (strongly log-concave, log-concave, logarithmic Sobolev inequality (LSI) …
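
The complexity of proximal-sampler methods is driven by the cost of the restricted Gaussian oracle (RGO), the inner draw from $\exp(-f(x) - \|x - y\|^2/(2\eta))$. A minimal rejection-sampling sketch, assuming for simplicity that $f \ge 0$ everywhere so that $e^{-f}$ is a valid acceptance probability; practical RGO implementations instead recentre the proposal at the proximal point to keep the acceptance rate high, which is where improved dimension dependence comes from:

```python
import numpy as np

def rgo_rejection(f, y, eta, rng, max_tries=10_000):
    """Restricted Gaussian oracle: one exact draw from
    pi(x | y) ~ exp(-f(x) - |x - y|^2 / (2*eta)),
    assuming f >= 0 so exp(-f) is a valid acceptance probability."""
    for _ in range(max_tries):
        x = y + np.sqrt(eta) * rng.standard_normal(y.shape)  # proposal N(y, eta I)
        if rng.uniform() < np.exp(-f(x)):                    # accept w.p. exp(-f(x))
            return x
    raise RuntimeError("acceptance rate too low; recentre the proposal")
```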

Resolving the mixing time of the Langevin algorithm to its stationary distribution for log-concave sampling

JM Altschuler, K Talwar - arXiv preprint arXiv:2210.08448, 2022 - arxiv.org
Sampling from a high-dimensional distribution is a fundamental task in statistics,
engineering, and the sciences. A canonical approach is the Langevin Algorithm, i.e., the …
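
Concretely, the Langevin Algorithm discretizes the Langevin diffusion with step size $h$: $x_{k+1} = x_k - h\,\nabla f(x_k) + \sqrt{2h}\,\xi_k$ with $\xi_k \sim N(0, I)$. A minimal sketch with illustrative parameter values; note that the algorithm's stationary distribution is a biased approximation of $\pi \propto e^{-f}$, and it is the mixing time to this biased limit that the paper resolves:

```python
import numpy as np

def unadjusted_langevin(grad_f, x0, h, n_iters, rng):
    """Langevin algorithm for pi(x) ~ exp(-f(x)):
    x_{k+1} = x_k - h * grad f(x_k) + sqrt(2h) * N(0, I)."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x - h * grad_f(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
    return x

# Example: sample from a standard Gaussian, f(x) = |x|^2 / 2.
rng = np.random.default_rng(0)
sample = unadjusted_langevin(lambda x: x, np.zeros(3), h=0.01, n_iters=5000, rng=rng)
```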

Query lower bounds for log-concave sampling

S Chewi, J de Dios Pont, J Li, C Lu, S Narayanan - Journal of the ACM, 2024 - dl.acm.org
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but
the corresponding problem of proving lower bounds for this task has remained elusive, with …

KALE flow: A relaxed KL gradient flow for probabilities with disjoint support

P Glaser, M Arbel, A Gretton - Advances in Neural …, 2021 - proceedings.neurips.cc
We study the gradient flow for a relaxed approximation to the Kullback-Leibler (KL)
divergence between a moving source and a fixed target distribution. This approximation …

Particle algorithms for maximum likelihood training of latent variable models

J Kuntz, JN Lim, AM Johansen - International Conference on …, 2023 - proceedings.mlr.press
Neal and Hinton (1998) recast maximum likelihood estimation of any given latent variable
model as the minimization of a free energy functional F, and the EM algorithm as coordinate …
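
In this free-energy view, one natural scheme descends $F$ jointly: a gradient step in the parameter $\theta$ averaged over a particle cloud, and a Langevin step moving the particles toward the current posterior over the latent variable. A toy sketch on the hypothetical model $x \sim N(\theta, 1)$, $y \mid x \sim N(x, 1)$; this illustrates the idea rather than reproducing the paper's exact algorithm:

```python
import numpy as np

def particle_em(y, theta0, n_particles, h, n_iters, rng):
    """Toy latent-variable model: x ~ N(theta, 1), y | x ~ N(x, 1).
    Jointly descend the free energy in theta and in the particle cloud:
    theta gets an averaged gradient step, particles get a Langevin step."""
    theta = theta0
    x = rng.standard_normal(n_particles)          # particle cloud for the latent x
    for _ in range(n_iters):
        grad_theta = np.mean(x - theta)           # d/dtheta log p(x, y), averaged
        grad_x = (y - x) - (x - theta)            # d/dx log p(x, y)
        theta = theta + h * grad_theta
        x = x + h * grad_x + np.sqrt(2 * h) * rng.standard_normal(n_particles)
    return theta, x

rng = np.random.default_rng(0)
theta_hat, particles = particle_em(y=1.3, theta0=0.0, n_particles=100,
                                   h=0.05, n_iters=2000, rng=rng)
```

For this toy model the marginal likelihood is $y \sim N(\theta, 2)$, so the maximizer is $\theta = y$ and the iterate should drift toward the observed value.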

A convergence theory for SVGD in the population limit under Talagrand's inequality T1

A Salim, L Sun, P Richtarik - International Conference on …, 2022 - proceedings.mlr.press
Stein Variational Gradient Descent (SVGD) is an algorithm for sampling from a
target density which is known up to a multiplicative constant. Although SVGD is a popular …
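
For reference, the SVGD update moves each particle along a kernelized direction combining an attraction term, driven by $\nabla \log \pi$ at the other particles, and a repulsion term from the kernel gradient that keeps the cloud spread out. A minimal sketch with an RBF kernel; the bandwidth and step size are illustrative:

```python
import numpy as np

def svgd_step(x, grad_log_pi, bandwidth, step):
    """One SVGD update: each particle moves along the averaged direction
    k(x_j, x_i) * grad log pi(x_j) + grad_{x_j} k(x_j, x_i), RBF kernel."""
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]                 # x_i - x_j
    sq = np.sum(diffs ** 2, axis=-1)
    k = np.exp(-sq / (2 * bandwidth ** 2))                # k(x_i, x_j), symmetric
    drift = k @ grad_log_pi(x)                            # attraction term
    repulse = np.sum(k[:, :, None] * diffs, axis=1) / bandwidth ** 2  # repulsion
    return x + step * (drift + repulse) / n

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 2))
for _ in range(500):
    x = svgd_step(x, lambda pts: -pts, bandwidth=1.0, step=0.1)  # target N(0, I)
```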

Sampling from structured log-concave distributions via a soft-threshold dikin walk

O Mangoubi, NK Vishnoi - Advances in Neural Information …, 2023 - proceedings.neurips.cc
Given a Lipschitz or smooth convex function $f: K \to \mathbb{R}$ for a bounded polytope
$K := \{\theta \in \mathbb{R}^d : A\theta \leq b\}$, where $A \in \mathbb{R}^{m \times d}$ and …
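
A Dikin walk proposes from a Gaussian whose covariance is the inverse Hessian of a barrier for $K$, so steps automatically shrink near the boundary, and applies a Metropolis filter. The sketch below uses the plain log-barrier (the paper's soft-threshold barrier is a different, regularized choice) and illustrative parameters:

```python
import numpy as np

def dikin_walk(f, A, b, theta0, n_iters, rng, r=0.5):
    """Metropolis-filtered Dikin walk for pi(theta) ~ exp(-f(theta)) on
    {theta : A theta <= b}, using the plain log-barrier Hessian."""
    m, d = A.shape

    def hessian(theta):
        s = b - A @ theta                       # slacks, positive inside K
        return (A / s[:, None] ** 2).T @ A      # sum_i a_i a_i^T / s_i^2

    def log_q(u, H_u, v):
        # log density (up to a shared constant) of proposal N(u, (r^2/d) H(u)^{-1})
        _, logdet = np.linalg.slogdet(H_u)
        w = v - u
        return 0.5 * logdet - (d / (2 * r ** 2)) * w @ H_u @ w

    theta = theta0.copy()
    H = hessian(theta)
    for _ in range(n_iters):
        L = np.linalg.cholesky(np.linalg.inv(H))
        z = theta + (r / np.sqrt(d)) * L @ rng.standard_normal(d)
        if np.any(A @ z >= b):                  # reject infeasible proposals
            continue
        H_z = hessian(z)
        log_acc = (f(theta) - f(z)
                   + log_q(z, H_z, theta) - log_q(theta, H, z))
        if np.log(rng.uniform()) < log_acc:
            theta, H = z, H_z
    return theta
```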

Penalized Overdamped and Underdamped Langevin Monte Carlo Algorithms for Constrained Sampling

M Gurbuzbalaban, Y Hu, L Zhu - Journal of Machine Learning Research, 2024 - jmlr.org
We consider the constrained sampling problem where the goal is to sample from a target
distribution $\pi(x) \propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body …
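
The penalized approach replaces the constrained problem with an unconstrained one: add a penalty for constraint violation to the potential and run Langevin Monte Carlo on the result. A minimal sketch of the overdamped variant with a quadratic distance penalty for the Euclidean ball $\{\|x\| \le R\}$; the penalty form and parameters are assumptions for illustration:

```python
import numpy as np

def penalized_ula(grad_f, x0, h, delta, R, n_iters, rng):
    """Overdamped penalized Langevin: run ULA on the penalized potential
    f(x) + dist(x, K)^2 / (2*delta) for the ball K = {|x| <= R};
    the penalty pushes iterates back toward K instead of projecting."""
    x = x0.copy()
    for _ in range(n_iters):
        norm = np.linalg.norm(x)
        excess = max(norm - R, 0.0)             # distance to the ball
        grad_pen = (excess / delta) * (x / norm) if norm > 0 else np.zeros_like(x)
        x = (x - h * (grad_f(x) + grad_pen)
             + np.sqrt(2 * h) * rng.standard_normal(x.shape))
    return x

rng = np.random.default_rng(0)
sample = penalized_ula(lambda x: x, x0=np.zeros(3), h=0.01, delta=0.1,
                       R=1.0, n_iters=5000, rng=rng)
```

Smaller `delta` enforces the constraint more tightly but makes the penalized potential stiffer, a trade-off the paper's analysis quantifies.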