A survey of distributed optimization

T Yang, X Yi, J Wu, Y Yuan, D Wu, Z Meng… - Annual Reviews in …, 2019 - Elsevier
In distributed optimization of multi-agent systems, agents cooperate to minimize a global
function which is a sum of local objective functions. Motivated by applications including …

Communication-efficient distributed learning: An overview

X Cao, T Başar, S Diggavi, YC Eldar… - IEEE Journal on …, 2023 - ieeexplore.ieee.org
Distributed learning is envisioned as the bedrock of next-generation intelligent networks,
where intelligent agents, such as mobile devices, robots, and sensors, exchange information …

Cooperative SGD: A unified framework for the design and analysis of local-update SGD algorithms

J Wang, G Joshi - Journal of Machine Learning Research, 2021 - jmlr.org
When training machine learning models using stochastic gradient descent (SGD) with a
large number of nodes or massive edge devices, the communication cost of synchronizing …
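
The local-update SGD schemes unified by this framework alternate several local gradient steps with a periodic model average. A minimal sketch of that pattern, assuming toy quadratic objectives f_i(x) = 0.5·(x − c_i)² whose global minimizer is the mean of the c_i; all names and parameters (tau, rounds, lr) are illustrative, not from the paper:

```python
# Toy local-update SGD (periodic averaging): K workers each take tau local
# gradient steps on their own objective, then synchronize by averaging models.
# Objectives are hypothetical quadratics f_i(x) = 0.5 * (x - c_i)**2.

def local_sgd(centers, tau=5, rounds=50, lr=0.1):
    x = 0.0  # shared model, synchronized once per communication round
    for _ in range(rounds):
        local_models = []
        for c in centers:
            xi = x
            for _ in range(tau):          # tau local SGD steps between syncs
                xi -= lr * (xi - c)       # gradient of 0.5 * (xi - c)**2
            local_models.append(xi)
        x = sum(local_models) / len(local_models)  # communication: average
    return x

print(round(local_sgd([1.0, 2.0, 3.0]), 4))  # → 2.0 (the mean of the c_i)
```

Raising tau cuts communication rounds per gradient step, at the cost of local models drifting apart between averages, which is the trade-off the paper's analysis quantifies.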

Distributed stochastic gradient tracking methods

S Pu, A Nedić - Mathematical Programming, 2021 - Springer
In this paper, we study the problem of distributed multi-agent optimization over a network,
where each agent possesses a local cost function that is smooth and strongly convex. The …

Distributed gradient methods for convex machine learning problems in networks: Distributed optimization

A Nedić - IEEE Signal Processing Magazine, 2020 - ieeexplore.ieee.org
This article provides an overview of distributed gradient methods for solving convex machine
learning problems of the form min_{x∈ℝⁿ} (1/m) Σ_{i=1}^{m} f_i(x) in a system consisting of m …
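
The baseline method surveyed here, decentralized (sub)gradient descent, has each agent mix its iterate with its neighbors' and then step along its own local gradient. A minimal sketch, assuming a hypothetical ring of five agents with toy quadratic costs f_i(x) = 0.5·(x − c_i)² and a diminishing step size a/(k+1), which drives the agents to the global minimizer (the mean of the c_i):

```python
# Decentralized gradient descent sketch for min_x (1/m) * sum_i f_i(x):
# each agent averages over its ring neighbors, then takes a local gradient
# step. Ring topology, costs, and step schedule are illustrative choices.

def dgd(centers, iters=2000, a=1.0):
    n = len(centers)
    # Ring mixing matrix: each agent weights itself and its two neighbors 1/3.
    W = [[1 / 3 if j in ((i - 1) % n, i, (i + 1) % n) else 0.0
          for j in range(n)]
         for i in range(n)]
    x = [0.0] * n
    for k in range(iters):
        step = a / (k + 1)                    # diminishing step size
        x = [sum(W[i][j] * x[j] for j in range(n))
             - step * (x[i] - centers[i])     # local gradient of f_i
             for i in range(n)]
    return x

print([round(v, 2) for v in dgd([1.0, 2.0, 3.0, 4.0, 5.0])])
```

With a constant step size this scheme only converges to a neighborhood of the minimizer; the diminishing schedule trades speed for exact convergence, a gap that motivates the gradient-tracking and accelerated methods in the entries around this one.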

Accelerated distributed Nesterov gradient descent

G Qu, N Li - IEEE Transactions on Automatic Control, 2019 - ieeexplore.ieee.org
This paper considers the distributed optimization problem over a network, where the
objective is to optimize a global function formed by a sum of local functions, using only local …

Relaysum for decentralized deep learning on heterogeneous data

T Vogels, L He, A Koloskova… - Advances in …, 2021 - proceedings.neurips.cc
In decentralized machine learning, workers compute model updates on their local data.
Because the workers only communicate with few neighbors without central coordination …

Cooperative fixed-time/finite-time distributed robust optimization of multi-agent systems

M Firouzbahrami, A Nobakhti - Automatica, 2022 - Elsevier
A new robust continuous-time optimization algorithm for distributed problems is presented
which guarantees fixed-time convergence. The algorithm is based on a Lyapunov function …

A general framework for decentralized optimization with first-order methods

R Xin, S Pu, A Nedić, UA Khan - Proceedings of the IEEE, 2020 - ieeexplore.ieee.org
Decentralized optimization to minimize a finite sum of functions, distributed over a network of
nodes, has been a significant area within control and signal-processing research due to its …

An improved convergence analysis for decentralized online stochastic non-convex optimization

R Xin, UA Khan, S Kar - IEEE Transactions on Signal …, 2021 - ieeexplore.ieee.org
In this paper, we study decentralized online stochastic non-convex optimization over a
network of nodes. Integrating a technique called gradient tracking in decentralized …