Extragradient Type Methods for Riemannian Variational Inequality Problems

Z Hu, G Wang, X Wang, A Wibisono… - International …, 2024 - proceedings.mlr.press
In this work, we consider monotone Riemannian Variational Inequality Problems (RVIPs),
which encompass both Riemannian convex optimization and minimax optimization as …

Alternating mirror descent for constrained min-max games

A Wibisono, M Tao, G Piliouras - Advances in Neural …, 2022 - proceedings.neurips.cc
In this paper we study two-player bilinear zero-sum games with constrained strategy spaces.
An instance of natural occurrences of such constraints is when mixed strategies are used …

Convergence of kinetic Langevin Monte Carlo on Lie groups

L Kong, M Tao - arXiv preprint arXiv:2403.12012, 2024 - arxiv.org
Explicit, momentum-based dynamics for optimizing functions defined on Lie groups were
recently constructed, based on techniques such as variational optimization and left …

Momentum Stiefel optimizer, with applications to suitably-orthogonal attention, and optimal transport

L Kong, Y Wang, M Tao - arXiv preprint arXiv:2205.14173, 2022 - arxiv.org
The problem of optimization on the Stiefel manifold, i.e., minimizing functions of (not necessarily
square) matrices that satisfy orthogonality constraints, has been extensively studied. Yet, a …

Quantum state generation with structure-preserving diffusion model

Y Zhu, T Chen, EA Theodorou, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
This article considers the generative modeling of the states of quantum systems, and an
approach based on the denoising diffusion model is proposed. The key contribution is an …

A derivative-free optimization method with application to functions with exploding and vanishing gradients

S Al-Abri, TX Lin, M Tao, F Zhang - IEEE Control Systems …, 2020 - ieeexplore.ieee.org
In this letter, we propose a bio-inspired derivative-free optimization algorithm capable of
minimizing objective functions with vanishing or exploding gradients. The proposed method …

Stochasticity of deterministic gradient descent: Large learning rate for multiscale objective function

L Kong, M Tao - Advances in neural information processing …, 2020 - proceedings.neurips.cc
This article suggests that deterministic Gradient Descent, which does not use any stochastic
gradient approximation, can still exhibit stochastic behaviors. In particular, it shows that if the …

Adaptive Hamiltonian variational integrators and applications to symplectic accelerated optimization

V Duruisseaux, J Schmitt, M Leok - SIAM Journal on Scientific Computing, 2021 - SIAM
It is well known that symplectic integrators lose their near energy preservation properties
when variable time-steps are used. The most common approach to combining adaptive time …

Semimartingale driven mechanics and reduction by symmetry for stochastic and dissipative dynamical systems

OD Street, S Takao - arXiv preprint arXiv:2312.09769, 2023 - arxiv.org
The recent interest in structure preserving stochastic Lagrangian and Hamiltonian systems
raises questions regarding how such models are to be understood and the principles …

Practical perspectives on symplectic accelerated optimization

V Duruisseaux, M Leok - Optimization Methods and Software, 2023 - Taylor & Francis
Geometric numerical integration has recently been exploited to design symplectic
accelerated optimization algorithms by simulating the Bregman Lagrangian and Hamiltonian …