Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning

H Rafique, M Liu, Q Lin, T Yang - Optimization Methods and …, 2022 - Taylor & Francis
Min–max problems have broad applications in machine learning, including learning with
non-decomposable loss and learning with robustness to data distribution. Convex–concave …

Variance reduction for matrix games

Y Carmon, Y Jin, A Sidford… - Advances in Neural …, 2019 - proceedings.neurips.cc
We present a randomized primal-dual algorithm that solves the problem min_x max_y yᵀAx
to additive error ε in time nnz(A) + √(nnz(A)·n)/ε, for matrix A with larger …
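The bilinear objective min_x max_y yᵀAx over probability simplices can be illustrated with simultaneous multiplicative-weights updates, a classic baseline for matrix games; this is a minimal sketch for context, not the paper's variance-reduced primal-dual algorithm, and the function name is illustrative.

```python
import numpy as np

def matrix_game_mwu(A, iters=2000, eta=0.05):
    """Approximate an equilibrium of min_x max_y y^T A x over probability
    simplices via simultaneous multiplicative-weights updates; returns
    the averaged iterates (x_bar, y_bar)."""
    m, n = A.shape                      # y in R^m (maximizer), x in R^n (minimizer)
    x = np.full(n, 1.0 / n)
    y = np.full(m, 1.0 / m)
    x_sum, y_sum = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        gx = A.T @ y                    # partial gradient in x of y^T A x
        gy = A @ x                      # partial gradient in y of y^T A x
        x = x * np.exp(-eta * gx)       # minimizer downweights costly actions
        x = x / x.sum()
        y = y * np.exp(eta * gy)        # maximizer upweights rewarding actions
        y = y / y.sum()
        x_sum += x
        y_sum += y
    return x_sum / iters, y_sum / iters

# Matching pennies: the game value is 0 at the uniform equilibrium.
A_mp = np.array([[1.0, -1.0], [-1.0, 1.0]])
xb, yb = matrix_game_mwu(A_mp)
```

The averaged (not last) iterates are returned because in zero-sum games it is the averages of simultaneous no-regret dynamics that converge to equilibrium.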

Fast stochastic bregman gradient methods: Sharp analysis and variance reduction

RA Dragomir, M Even… - … Conference on Machine …, 2021 - proceedings.mlr.press
We study the problem of minimizing a relatively-smooth convex function using stochastic
Bregman gradient methods. We first prove the convergence of Bregman Stochastic Gradient …

Fast distributionally robust learning with variance-reduced min-max optimization

Y Yu, T Lin, EV Mazumdar… - … Conference on Artificial …, 2022 - proceedings.mlr.press
Distributionally robust supervised learning (DRSL) is emerging as a key paradigm for
building reliable machine learning systems for real-world applications–reflecting the need …

Sublinear classical and quantum algorithms for general matrix games

T Li, C Wang, S Chakrabarti, X Wu - … of the AAAI Conference on Artificial …, 2021 - ojs.aaai.org
We investigate sublinear classical and quantum algorithms for matrix games, a fundamental
problem in optimization and machine learning, with provable guarantees. Given a matrix …

Level-set methods for finite-sum constrained convex optimization

Q Lin, R Ma, T Yang - International conference on machine …, 2018 - proceedings.mlr.press
We consider constrained optimization where the objective function and the constraints
are defined as sums of finitely many loss functions. This model has applications in …

A stochastic primal-dual splitting algorithm with variance reduction for composite optimization problems

VD Nguyen, B Công Vũ, D Papadimitriou - Applicable Analysis, 2024 - Taylor & Francis
This paper revisits the generic structured primal-dual problem involving the infimal
convolution in real Hilbert spaces. For this purpose, we develop a stochastic primal-dual …

A stochastic Bregman golden ratio algorithm for non-Lipschitz stochastic mixed variational inequalities with application to resource share problems

XJ Long, J Yang - Journal of Computational and Applied Mathematics, 2025 - Elsevier
In the study of stochastic mixed variational inequalities (SMVIs), Lipschitz continuity is an
indispensable assumption for the convergence analysis. However, practical applications …
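For context, the underlying golden ratio algorithm (Malitsky's deterministic, Euclidean version for a Lipschitz monotone variational inequality) can be sketched as follows; the paper's stochastic Bregman variant for non-Lipschitz SMVIs is substantially more general, and the names here are illustrative.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # the golden ratio

def golden_ratio_vi(F, proj, x0, lam, iters=2000):
    """Golden ratio algorithm for a monotone VI: find x* in C with
    <F(x*), x - x*> >= 0 for all x in C. Uses step size lam <= PHI/(2L)
    for L-Lipschitz F."""
    x, x_bar = x0.copy(), x0.copy()
    for _ in range(iters):
        x_bar = ((PHI - 1) * x + x_bar) / PHI   # golden-ratio averaging
        x = proj(x_bar - lam * F(x))            # forward step + projection
    return x

# Affine monotone example: F(x) = A x - b with positive-definite symmetric
# part, unconstrained (proj = identity), so the solution solves A x = b.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
L = np.linalg.norm(A, 2)
x_hat = golden_ratio_vi(lambda x: A @ x - b, lambda z: z,
                        np.zeros(2), lam=PHI / (2 * L))
```

The averaging sequence x_bar is what distinguishes the method from the extragradient algorithm: it needs only one operator evaluation and one projection per iteration.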

Ap-perf: Incorporating generic performance metrics in differentiable learning

R Fathony, Z Kolter - International Conference on Artificial …, 2020 - proceedings.mlr.press
We propose a method that enables practitioners to conveniently incorporate custom non-
decomposable performance metrics into differentiable learning pipelines, notably those …

Bregman gradient methods for relatively-smooth optimization

RA Dragomir - 2021 - inria.hal.science
In statistical learning and signal processing, many tasks are formulated as
large-scale optimization problems. In this context, the methods …