In the last few years, the theory of decentralized distributed convex optimization has made significant progress. The lower bounds on communication rounds and oracle calls have …
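For context, a representative bound of this kind from the literature (not quoted from the snippet itself): for $L$-smooth, $\mu$-strongly convex objectives distributed over a fixed network, reaching accuracy $\varepsilon$ is known to require

$$\Omega\left(\sqrt{\kappa \chi}\,\log\frac{1}{\varepsilon}\right), \qquad \kappa = \frac{L}{\mu},$$

communication rounds, where $\chi$ is the condition number of the gossip matrix of the communication graph (Scaman et al., 2017).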
A Rogozin, M Bochko, P Dvurechensky… - 2021 60th IEEE …, 2021 - ieeexplore.ieee.org
We consider a distributed stochastic optimization problem that is solved by a decentralized network of agents with only local communication between neighboring agents. The goal of …
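A typical formulation of this setting, with notation assumed here rather than taken from the paper: each of the $n$ agents holds a local stochastic objective, and the network jointly solves

$$\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x), \qquad f_i(x) = \mathbb{E}_{\xi_i}\left[F_i(x, \xi_i)\right],$$

where agent $i$ can draw stochastic gradients of its own $f_i$ and exchange vectors only with its neighbors in the communication graph.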
We introduce an inexact oracle model for variational inequalities with monotone operators, propose a numerical method that solves such variational inequalities, and analyze its …
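For reference, the standard problem class behind this snippet (textbook definitions, not quoted from the paper): given an operator $F \colon Q \to \mathbb{R}^d$, the variational inequality asks for $x^* \in Q$ with

$$\langle F(x^*), x - x^* \rangle \ge 0 \quad \text{for all } x \in Q,$$

where $F$ is monotone if $\langle F(x) - F(y), x - y \rangle \ge 0$ for all $x, y \in Q$; an inexact oracle model replaces exact values of $F$ by approximations whose errors are tracked through the convergence analysis.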
We propose general non-accelerated [The results for non-accelerated methods first appeared in December 2020 in the preprint (A. Agafonov, D. Kamzolov, P. Dvurechensky …
This paper considers the problem of decentralized, personalized federated learning. For centralized personalized federated learning, a penalty that measures the deviation from the …
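One common way to write such a penalty (a sketch; the exact form used in the paper is cut off in the snippet): each client $i$ keeps a personal model $x_i$, and deviation from the mean model is penalized,

$$\min_{x_1, \dots, x_n} \; \frac{1}{n}\sum_{i=1}^{n} f_i(x_i) + \frac{\lambda}{2n}\sum_{i=1}^{n} \|x_i - \bar{x}\|^2, \qquad \bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j,$$

where $\lambda \ge 0$ interpolates between purely local training ($\lambda = 0$) and a single shared model ($\lambda \to \infty$).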
Many convex optimization problems have structured objective functions written as a sum of functions with different oracle types (e.g., full gradient, coordinate derivative, stochastic …
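Concretely, the structure in question can be sketched as follows (notation assumed):

$$\min_{x \in \mathbb{R}^d} \; f(x) = \sum_{j=1}^{m} f_j(x),$$

where, for instance, $f_1$ is accessed through its full gradient $\nabla f_1(x)$, $f_2$ through single coordinate derivatives $\nabla_i f_2(x)$, and $f_3$ through an unbiased stochastic gradient $g(x, \xi)$ with $\mathbb{E}_\xi[g(x, \xi)] = \nabla f_3(x)$.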
Exploiting higher-order derivatives in convex optimization has been known at least since the 1970s. In each iteration, higher-order (also called tensor) methods minimize a regularized Taylor …
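The step of such a method can be sketched as follows (the normalization of the regularizer varies across papers): from the current iterate $x_k$, minimize the $p$-th order Taylor model of $f$ plus a $(p+1)$-st power regularizer,

$$x_{k+1} \in \arg\min_{y} \left[ \sum_{i=0}^{p} \frac{1}{i!} D^i f(x_k)[y - x_k]^i + \frac{M}{(p+1)!} \|y - x_k\|^{p+1} \right],$$

where $D^i f(x_k)[\cdot]^i$ denotes the $i$-th directional derivative and $M$ is chosen large enough, relative to the Lipschitz constant of the $p$-th derivative, to make the subproblem convex.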
In this paper we propose three $p$-th order tensor methods for $\mu$-strongly-convex-strongly-concave saddle point problems (SPP). The first method is based on the assumption …
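The problem class here, in standard notation: find a saddle point of

$$\min_{x \in \mathbb{R}^{d_x}} \max_{y \in \mathbb{R}^{d_y}} f(x, y),$$

where $f(\cdot, y)$ is $\mu$-strongly convex for every fixed $y$, $f(x, \cdot)$ is $\mu$-strongly concave for every fixed $x$, and a $p$-th order method may query derivatives of $f$ up to order $p$.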
This article develops algorithmic methods that ensure efficient complexity bounds for strongly convex-concave saddle point problems in the case when one …
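If the cut-off condition concerns unbalanced conditioning (an assumption here, since the snippet ends mid-sentence), the relevant quantities are the partial condition numbers

$$\kappa_x = \frac{L}{\mu_x}, \qquad \kappa_y = \frac{L}{\mu_y},$$

for an $L$-smooth objective that is $\mu_x$-strongly convex in $x$ and $\mu_y$-strongly concave in $y$; methods in this line of work aim for complexity scaling with $\sqrt{\kappa_x \kappa_y}$ rather than with $\max(\kappa_x, \kappa_y)$.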