Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein space

MZ Diao, K Balasubramanian… - … on Machine Learning, 2023 - proceedings.mlr.press
Variational inference (VI) seeks to approximate a target distribution $\pi$ by an element of a
tractable family of distributions. Of key interest in statistics and machine learning is Gaussian …
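
For orientation, a minimal sketch of the setup (notation ours, not taken from the snippet): Gaussian VI minimizes the KL divergence to $\pi$ over the Gaussian family, and a JKO scheme discretizes the corresponding gradient flow in the Bures-Wasserstein space via proximal steps with step size $\tau$,

$$\min_{p=\mathcal N(m,\Sigma)} \mathrm{KL}(p\,\|\,\pi), \qquad p_{k+1} \;=\; \operatorname*{arg\,min}_{p}\ \mathrm{KL}(p\,\|\,\pi) + \tfrac{1}{2\tau}\,W_2^2(p,p_k).$$

The forward-backward splitting of this proximal step is the paper's contribution and is not reproduced here.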

The Wasserstein proximal gradient algorithm

A Salim, A Korba, G Luise - Advances in Neural Information …, 2020 - proceedings.neurips.cc
Wasserstein gradient flows are continuous-time dynamics that define curves of steepest
descent to minimize an objective function over the space of probability measures (i.e., the …
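
As a reference point (functional $F$ and step size $\gamma$ are our notation): the JKO, or minimizing-movement, discretization of a Wasserstein gradient flow replaces the continuous dynamics with proximal steps in the 2-Wasserstein metric,

$$\mu_{k+1} \;=\; \operatorname*{arg\,min}_{\mu}\ F(\mu) + \tfrac{1}{2\gamma}\,W_2^2(\mu,\mu_k).$$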

Primal-dual interpretation of the proximal stochastic gradient Langevin algorithm

A Salim, P Richtarik - Advances in Neural Information …, 2020 - proceedings.neurips.cc
We consider the task of sampling with respect to a log-concave probability distribution. The
potential of the target distribution is assumed to be composite, i.e., written as the sum of a …
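
For concreteness, a hedged sketch of the kind of iteration at stake: with composite potential $U = f + g$, $f$ smooth and $g$ proximable, one proximal stochastic gradient Langevin step with step size $\gamma$ and $\xi_k \sim \mathcal N(0, I)$ reads

$$x_{k+1} \;=\; \mathrm{prox}_{\gamma g}\big(x_k - \gamma\,\nabla f(x_k) + \sqrt{2\gamma}\,\xi_k\big),$$

with the stochastic variant replacing $\nabla f$ by an unbiased estimate.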

Stochastic proximal langevin algorithm: Potential splitting and nonasymptotic rates

A Salim, D Kovalev, P Richtárik - Advances in Neural …, 2019 - proceedings.neurips.cc
We propose a new algorithm, the Stochastic Proximal Langevin Algorithm (SPLA), for
sampling from a log-concave distribution. Our method is a generalization of the Langevin …
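
One plausible reading of the potential splitting (notation ours, not a verbatim statement of SPLA): for $U = f + \sum_{i=1}^n g_i$, take a noisy stochastic gradient step on $f$, then apply the proximity operators of the $g_i$ in succession,

$$y_k = x_k - \gamma\,\widehat{\nabla f}(x_k) + \sqrt{2\gamma}\,\xi_k, \qquad x_{k+1} = \mathrm{prox}_{\gamma g_n} \circ \cdots \circ \mathrm{prox}_{\gamma g_1}(y_k).$$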

The proximal Robbins–Monro method

P Toulis, T Horel, EM Airoldi - Journal of the Royal Statistical …, 2021 - academic.oup.com
The need for statistical estimation with large data sets has reinvigorated interest in iterative
procedures and stochastic optimization. Stochastic approximations are at the forefront of this …
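
For context (our notation): where the classical Robbins–Monro iteration updates explicitly, $\theta_{n+1} = \theta_n - \gamma_n H(\theta_n;\xi_n)$, its proximal (implicit) counterpart solves for the next iterate inside the update,

$$\theta_{n+1} \;=\; \theta_n - \gamma_n\,H(\theta_{n+1};\xi_n),$$

i.e., a stochastic proximal-point step.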

Almost surely constrained convex optimization

O Fercoq, A Alacaoglu, I Necoara… - … on Machine Learning, 2019 - proceedings.mlr.press
We propose a stochastic gradient framework for solving stochastic composite convex
optimization problems with a (possibly) infinite number of linear inclusion constraints that need …
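
One way to write such a template (symbols ours): minimize a composite expectation subject to random linear inclusions that must hold almost surely,

$$\min_{x}\ \mathbb{E}_\xi\big[f(x,\xi)\big] + g(x) \quad \text{s.t.} \quad A_\xi x - b_\xi \in \mathcal K_\xi \ \text{almost surely}.$$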

Stochastic subgradient for composite convex optimization with functional constraints

I Necoara, NK Singh - Journal of Machine Learning Research, 2022 - jmlr.org
In this paper we consider optimization problems with a stochastic composite objective function
subject to a (possibly) infinite intersection of constraints. The objective function is expressed in …
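
A hedged template for this problem class (symbols ours): a stochastic composite objective over the intersection of functional constraints indexed by $\zeta$,

$$\min_{x}\ \mathbb{E}\big[f(x,\zeta)\big] + g(x) \quad \text{s.t.} \quad h(x,\zeta) \le 0 \ \text{for almost all } \zeta.$$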

Snake: a stochastic proximal gradient algorithm for regularized problems over large graphs

A Salim, P Bianchi, W Hachem - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
A regularized optimization problem over a large unstructured graph is studied, where the
regularization term is tied to the graph geometry. Typical regularization examples include …
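
A typical instance of such graph-tied regularization (the snippet is cut off before naming its examples) is the graph total variation: for $G=(V,E)$ with edge weights $w_{uv}$,

$$\min_{x\in\mathbb{R}^V}\ \sum_{v\in V} f_v(x_v) \;+\; \lambda \sum_{(u,v)\in E} w_{uv}\,|x_u - x_v|.$$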

Hamiltonian descent for composite objectives

B O'Donoghue, CJ Maddison - Advances in Neural …, 2019 - proceedings.neurips.cc
In optimization, the duality gap between the primal and the dual problems is a measure of the
suboptimality of any primal-dual point. In classical mechanics, the equations of motion of a …
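
For reference, in the standard Fenchel-Rockafellar setup (notation ours), with primal $\min_x f(x) + g(Ax)$ and dual $\max_y -f^*(-A^\top y) - g^*(y)$, the duality gap at a primal-dual point $(x,y)$ is

$$\mathrm{gap}(x,y) \;=\; f(x) + g(Ax) + f^*(-A^\top y) + g^*(y) \;\ge\; 0,$$

vanishing exactly at optimal pairs.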

A fully stochastic primal-dual algorithm

P Bianchi, W Hachem, A Salim - Optimization Letters, 2021 - Springer
A new stochastic primal-dual algorithm for solving a composite optimization problem is
proposed. It is assumed that all the functions/operators that enter the optimization problem …
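
As a hedged point of comparison (not claimed to be the paper's exact iteration): for $\min_x f(x) + g(x) + h(Lx)$ with $f$ smooth, the deterministic Condat-Vũ primal-dual step that such fully stochastic schemes perturb with random estimates is

$$x_{k+1} = \mathrm{prox}_{\tau g}\big(x_k - \tau\nabla f(x_k) - \tau L^\top y_k\big), \qquad y_{k+1} = \mathrm{prox}_{\sigma h^*}\big(y_k + \sigma L(2x_{k+1} - x_k)\big).$$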