D Kovalev, A Gasnikov… - Advances in Neural …, 2022 - proceedings.neurips.cc
In this paper we study the convex-concave saddle-point problem $\min_x \max_y f(x) + y^\top \mathbf{A} x - g(y)$, where $f(x)$ and $g(y)$ are smooth and convex functions. We …
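For orientation, the simplest baseline for this bilinear-coupling problem is simultaneous gradient descent-ascent; the display below is that textbook iteration (step size $\eta$ is generic notation), not the accelerated scheme this paper develops:

```latex
\begin{aligned}
x_{k+1} &= x_k - \eta \left( \nabla f(x_k) + \mathbf{A}^\top y_k \right), \\
y_{k+1} &= y_k + \eta \left( \mathbf{A} x_k - \nabla g(y_k) \right).
\end{aligned}
```

The $x$-step descends on $f(x) + y^\top \mathbf{A} x$ and the $y$-step ascends on $y^\top \mathbf{A} x - g(y)$, matching the min-max structure of the problem.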
Y Jin, A Sidford, K Tian - Conference on Learning Theory, 2022 - proceedings.mlr.press
We design accelerated algorithms with improved rates for several fundamental classes of optimization problems. Our algorithms all build upon techniques related to the analysis of …
Variational inequalities are a formalism that includes games, minimization, saddle point, and equilibrium problems as special cases. Methods for variational inequalities are therefore …
While numerous effective decentralized algorithms have been proposed with theoretical guarantees and empirical successes, the performance limits in decentralized optimization …
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network whose links are …
This paper considers the decentralized convex optimization problem, which has a wide range of applications in large-scale machine learning, sensor networks, and control theory …
Proximal splitting algorithms are well suited to solving large-scale nonsmooth optimization problems, in particular those arising in machine learning. We propose a new primal-dual …
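As background for the snippet above, the most basic proximal splitting scheme is the proximal gradient (forward-backward) method. The sketch below applies it to a lasso-type problem; the objective, step size, and function names are illustrative, not the primal-dual algorithm the paper proposes:

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t*||.||_1: shrink each coordinate toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting:
    a gradient step on the smooth part, then the prox of the nonsmooth part."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                          # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x
```

The split matters because the nonsmooth $\ell_1$ term has a cheap closed-form prox (soft-thresholding), so each iteration costs only a matrix-vector product; convergence requires the step size to be below the inverse Lipschitz constant of the smooth part's gradient.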
Z Song, L Shi, S Pu, M Yan - Mathematical Programming, 2024 - Springer
In this paper, we focus on solving the decentralized optimization problem of minimizing the sum of n objective functions over a multi-agent network. The agents are embedded in an …
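To make the setting concrete, here is a minimal sketch of plain decentralized gradient descent (DGD) over a 4-agent ring, with scalar quadratic local objectives and a doubly stochastic mixing matrix; all names and constants are illustrative, and this is the classical baseline, not the method of the paper above:

```python
import numpy as np

def decentralized_gradient_descent(targets, W, step=0.1, iters=2000):
    """Each agent i holds f_i(x) = 0.5*(x - targets[i])^2, so the sum of the
    f_i is minimized at the average of the targets. One DGD round: average
    with neighbors via the mixing matrix W, then take a local gradient step."""
    x = np.zeros_like(targets)       # one local iterate per agent
    for _ in range(iters):
        grads = x - targets          # gradient of each local quadratic
        x = W @ x - step * grads     # gossip mixing, then local descent
    return x

# Ring of 4 agents: each node averages with itself and its two neighbors.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 3.0, 6.0])
x = decentralized_gradient_descent(targets, W)
```

With a constant step size, DGD reaches consensus only up to an O(step) bias around the global minimizer (here the target average, 3.0); removing that bias is exactly what gradient-tracking corrections are designed for.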
A Sadiev, D Kovalev… - Advances in Neural …, 2022 - proceedings.neurips.cc
Inspired by a recent breakthrough of Mishchenko et al. [2022], who for the first time showed that local gradient steps can lead to provable communication acceleration, we propose an …