We consider the problem of sampling from a target distribution, which is not necessarily log-concave, in the context of empirical risk minimization and stochastic optimization as …
We provide a nonasymptotic analysis of the convergence of the stochastic gradient Hamiltonian Monte Carlo (SGHMC) algorithm to a target measure in Wasserstein-2 distance without …
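The snippet does not specify the discretization, so the following is only a minimal sketch of one common Euler-type SGHMC update, assuming an unbiased stochastic gradient estimator `grad_est` and illustrative parameter names (`eta` for step size, `gamma` for friction, `beta` for inverse temperature):

```python
import numpy as np

def sghmc(grad_est, x0, eta=1e-3, gamma=1.0, beta=1.0, n_iter=10_000, rng=None):
    """One common Euler-type SGHMC discretization (names and defaults are illustrative)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                     # auxiliary momentum variable
    noise_scale = np.sqrt(2.0 * gamma * eta / beta)
    for _ in range(n_iter):
        g = grad_est(x)                      # unbiased stochastic gradient of f
        v = v - eta * (gamma * v + g) + noise_scale * rng.standard_normal(x.shape)
        x = x + eta * v                      # position update
    return x
```

The friction term `gamma * v` dissipates energy injected by the noise, which is what lets the chain target the Gibbs measure rather than conserve the Hamiltonian.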
In this paper, we are concerned with a non-asymptotic analysis of sampling algorithms used in nonconvex optimization. In particular, we obtain non-asymptotic estimates in Wasserstein …
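As a concrete reference point for the class of algorithms such analyses typically cover, here is a hedged sketch of stochastic gradient Langevin dynamics (SGLD); the function and parameter names are illustrative and not taken from the paper:

```python
import numpy as np

def sgld(stoch_grad, x0, eta=1e-3, beta=1.0, n_iter=10_000, rng=None):
    """SGLD: a stochastic gradient step plus Gaussian noise at inverse temperature beta."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - eta * stoch_grad(x) + np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape)
    return x
```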
We consider the constrained sampling problem where the goal is to sample from a target distribution $\pi(x)\propto e^{-f(x)}$ when $x$ is constrained to lie on a convex body …
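One standard way to handle such a constraint, which may or may not match the paper's scheme, is projected Langevin Monte Carlo: take an unconstrained Langevin step, then project back onto the convex body. A minimal sketch, with `proj_K` an assumed projection oracle:

```python
import numpy as np

def projected_lmc(grad_f, proj_K, x0, eta=1e-3, n_iter=10_000, rng=None):
    """Projected LMC: unconstrained Langevin step, then Euclidean projection onto K."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
        x = proj_K(x)                        # project back onto the convex body K
    return x

# Example: constraining the samples to the unit Euclidean ball.
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
```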
Sampling from probability distributions is an important problem in statistics and machine learning, especially in Bayesian inference when integration with respect to posterior …
Artificial neural networks (ANNs) are typically highly nonlinear systems that are finely tuned via the optimization of their associated nonconvex loss functions. In many cases, the …
T Johnston, I Lytras, S Sabanis - Journal of Complexity, 2024 - Elsevier
In this article we consider sampling from log-concave distributions in the Hamiltonian setting, without assuming that the objective gradient is globally Lipschitz. We propose two …
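The snippet does not describe the two proposed schemes; the sketch below shows a generic "tamed" kinetic Langevin step, one standard device for gradients that are not globally Lipschitz, purely as an illustration of the idea rather than the paper's algorithm:

```python
import numpy as np

def tamed_kinetic_langevin(grad_f, x0, eta=1e-3, gamma=1.0, n_iter=10_000, rng=None):
    """Kinetic (underdamped) Langevin with a tamed drift (illustrative scheme only)."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad_f(x)
        g = g / (1.0 + np.sqrt(eta) * np.linalg.norm(g))   # taming keeps the drift bounded per step
        v = v - eta * (gamma * v + g) + np.sqrt(2.0 * gamma * eta) * rng.standard_normal(x.shape)
        x = x + eta * v
    return x
```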
DY Lim, A Neufeld, S Sabanis… - IMA Journal of Numerical …, 2024 - academic.oup.com
We consider nonconvex stochastic optimization problems where the objective functions have super-linearly growing and discontinuous stochastic gradients. In such a setting, we …
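A common remedy in this literature is taming, where the (possibly stochastic) gradient is divided by a polynomial in the iterate so that super-linear growth cannot blow up a single step. A hedged single-step sketch, with the taming denominator chosen for illustration rather than copied from the paper:

```python
import numpy as np

def tamed_sgld_step(x, stoch_grad, eta, beta=1.0, r=0.5, rng=None):
    """One tamed stochastic-gradient Langevin step (taming denominator is illustrative)."""
    rng = np.random.default_rng(rng)
    g = stoch_grad(x)
    tamed = g / (1.0 + np.sqrt(eta) * np.linalg.norm(x) ** (2 * r))  # controls superlinear growth in x
    noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal(x.shape)
    return x - eta * tamed + noise
```

Because the denominator scales with the step size, the tamed scheme recovers the untamed drift as `eta` goes to zero while remaining stable for any fixed `eta`.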
We study the convergence in total variation and $V$-norm of discretization schemes of the underdamped Langevin dynamics. Such algorithms are widely used in …
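For concreteness, the simplest such discretization is the Euler-Maruyama scheme for the underdamped Langevin SDE $dX_t = V_t\,dt$, $dV_t = -(\gamma V_t + \nabla f(X_t))\,dt + \sqrt{2\gamma}\,dB_t$; a minimal sketch with illustrative names, not the specific schemes the paper analyzes:

```python
import numpy as np

def underdamped_euler(grad_f, x0, v0, eta=1e-2, gamma=1.0, n_iter=10_000, rng=None):
    """Euler-Maruyama discretization of the underdamped Langevin SDE."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    v = np.asarray(v0, dtype=float)
    for _ in range(n_iter):
        x_new = x + eta * v                  # position uses the pre-update velocity
        v = v - eta * (gamma * v + grad_f(x)) + np.sqrt(2.0 * gamma * eta) * rng.standard_normal(x.shape)
        x = x_new
    return x, v
```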