Structured logconcave sampling with a restricted Gaussian oracle

YT Lee, R Shen, K Tian - Conference on Learning Theory, 2021 - proceedings.mlr.press
We give algorithms for sampling several structured logconcave families to high accuracy.
We further develop a reduction framework, inspired by proximal point methods in convex …

On stochastic gradient Langevin dynamics with dependent data streams: The fully nonconvex case

NH Chau, É Moulines, M Rásonyi, S Sabanis… - SIAM Journal on …, 2021 - SIAM
We consider the problem of sampling from a target distribution, which is not necessarily log-concave, in the context of empirical risk minimization and stochastic optimization as …

Nonasymptotic analysis of Stochastic Gradient Hamiltonian Monte Carlo under local conditions for nonconvex optimization

OD Akyildiz, S Sabanis - Journal of Machine Learning Research, 2024 - jmlr.org
We provide a nonasymptotic analysis of the convergence of the stochastic gradient
Hamiltonian Monte Carlo (SGHMC) to a target measure in Wasserstein-2 distance without …

Nonasymptotic estimates for stochastic gradient Langevin dynamics under local conditions in nonconvex optimization

Y Zhang, ÖD Akyildiz, T Damoulas… - Applied Mathematics & …, 2023 - Springer
In this paper, we are concerned with a non-asymptotic analysis of sampling algorithms used
in nonconvex optimization. In particular, we obtain non-asymptotic estimates in Wasserstein …

Penalized Overdamped and Underdamped Langevin Monte Carlo Algorithms for Constrained Sampling

M Gurbuzbalaban, Y Hu, L Zhu - Journal of Machine Learning Research, 2024 - jmlr.org
We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x) \propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body …

Schrödinger-Föllmer sampler: sampling without ergodicity

J Huang, Y Jiao, L Kang, X Liao, J Liu… - arXiv preprint arXiv …, 2021 - researchgate.net
Sampling from probability distributions is an important problem in statistics and machine learning, especially in Bayesian inference when integration with respect to posterior …

Taming neural networks with TUSLA: Nonconvex learning via adaptive stochastic gradient Langevin algorithms

A Lovas, I Lytras, M Rásonyi, S Sabanis - SIAM Journal on Mathematics of …, 2023 - SIAM
Artificial neural networks (ANNs) are typically highly nonlinear systems which are finely
tuned via the optimization of their associated, nonconvex loss functions. In many cases, the …

Kinetic Langevin MCMC Sampling Without Gradient Lipschitz Continuity - the Strongly Convex Case

T Johnston, I Lytras, S Sabanis - Journal of Complexity, 2024 - Elsevier
In this article we consider sampling from log-concave distributions in a Hamiltonian setting, without assuming that the objective gradient is globally Lipschitz. We propose two …

Non-asymptotic estimates for TUSLA algorithm for non-convex learning with applications to neural networks with ReLU activation function

DY Lim, A Neufeld, S Sabanis… - IMA Journal of Numerical …, 2024 - academic.oup.com
We consider nonconvex stochastic optimization problems where the objective functions
have super-linearly growing and discontinuous stochastic gradients. In such a setting, we …

Uniform minorization condition and convergence bounds for discretizations of kinetic Langevin dynamics

A Durmus, A Enfroy, É Moulines, G Stoltz - arXiv preprint arXiv:2107.14542, 2021 - arxiv.org
We study the convergence in total variation and $V$-norm of discretization schemes of the underdamped Langevin dynamics. Such algorithms are very popular and commonly used in …