Distributed learning is envisioned as the bedrock of next-generation intelligent networks, where agents such as mobile devices, robots, and sensors exchange information …
J Wang, G Joshi - Journal of Machine Learning Research, 2021 - jmlr.org
When training machine learning models using stochastic gradient descent (SGD) with a large number of nodes or massive edge devices, the communication cost of synchronizing …
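The snippet is cut off before the algorithm itself; the following minimal sketch (all names hypothetical, not the paper's code) illustrates the local-update pattern this line of work studies: workers take several local SGD steps between synchronizations, so communication happens once per `tau` steps instead of every step.

```python
import numpy as np

def local_sgd(worker_data, stoch_grad, x0, lr=0.1, tau=5, rounds=50):
    # Hypothetical sketch: each worker runs `tau` local SGD steps
    # between synchronizations, then all local models are averaged.
    # Synchronizing every `tau` steps divides communication cost
    # by roughly a factor of `tau`.
    models = [x0.copy() for _ in worker_data]
    for _ in range(rounds):
        for x, data in zip(models, worker_data):
            for _ in range(tau):
                sample = data[np.random.randint(len(data))]
                x -= lr * stoch_grad(x, sample)   # local update, no communication
        avg = sum(models) / len(models)           # one communication (averaging) round
        models = [avg.copy() for _ in models]
    return models[0]
```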
S Pu, A Nedić - Mathematical Programming, 2021 - Springer
In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The …
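As a rough illustration of the gradient-tracking idea studied in this line of work (one common form of the update, not necessarily this paper's exact recursion), assuming a doubly stochastic mixing matrix `W` and a user-supplied noisy gradient oracle:

```python
import numpy as np

def stochastic_gradient_tracking(W, stoch_grad, x0, lr=0.05, iters=500):
    # Hedged sketch of gradient tracking with stochastic gradients.
    # W:  n-by-n doubly stochastic mixing matrix
    # stoch_grad(i, x): noisy gradient of agent i's local cost at x
    # x0: n-by-d array of initial iterates, one row per agent
    n, d = x0.shape
    x = x0.copy()
    g = np.stack([stoch_grad(i, x[i]) for i in range(n)])
    y = g.copy()                      # trackers start at the local gradients
    for _ in range(iters):
        x = W @ x - lr * y            # consensus step plus tracked descent
        g_new = np.stack([stoch_grad(i, x[i]) for i in range(n)])
        y = W @ y + g_new - g         # track the network-average gradient
        g = g_new
    return x.mean(axis=0)
```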
A Nedić - IEEE Signal Processing Magazine, 2020 - ieeexplore.ieee.org
This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^{m} f_i(x)$ in a system consisting of $m$ …
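A minimal sketch of the basic decentralized gradient method such overviews cover (this "combine-then-adapt" ordering is one of several common variants; all names here are illustrative):

```python
import numpy as np

def decentralized_gradient_method(W, grads, x0, lr=0.01, iters=1000):
    # Sketch of the classic decentralized gradient iteration:
    # mix with neighbors via W, then take a local gradient step.
    # W: m-by-m doubly stochastic mixing matrix
    # grads: list of callables, grads[i](x) = gradient of f_i at x
    # x0: one row per agent
    x = x0.copy()
    for _ in range(iters):
        x = W @ x                        # consensus (averaging) step
        for i, g in enumerate(grads):
            x[i] -= lr * g(x[i])         # local gradient step
    return x.mean(axis=0)
```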
G Qu, N Li - IEEE Transactions on Automatic Control, 2019 - ieeexplore.ieee.org
This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local …
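The snippet ends before the update itself; a widely cited gradient-tracking recursion from this line of work (notation assumed here, not taken from the snippet) is:

```latex
% Gradient-tracking recursion (one widely used form; notation assumed):
% agent i keeps an iterate x_i^k and a tracker y_i^k, with y_i^0 = \nabla f_i(x_i^0).
\begin{align*}
  x_i^{k+1} &= \sum_{j=1}^{m} w_{ij}\, x_j^{k} - \eta\, y_i^{k},\\
  y_i^{k+1} &= \sum_{j=1}^{m} w_{ij}\, y_j^{k} + \nabla f_i\big(x_i^{k+1}\big) - \nabla f_i\big(x_i^{k}\big).
\end{align*}
% With doubly stochastic weights w_{ij}, the tracker average satisfies
% (1/m) \sum_i y_i^k = (1/m) \sum_i \nabla f_i(x_i^k) at every k,
% so each agent descends along an estimate of the global gradient.
```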
In decentralized machine learning, workers compute model updates on their local data. Because the workers communicate with only a few neighbors and without central coordination …
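Sparse neighbor-only communication is usually encoded in a mixing matrix. A standard construction (shown here as a hypothetical helper, not code from the paper) is the Metropolis-Hastings rule, which needs only the degrees of a node and its neighbors:

```python
import numpy as np

def metropolis_weights(adj):
    # Build a symmetric, doubly stochastic mixing matrix from an
    # undirected adjacency matrix using Metropolis-Hastings weights:
    # w_ij = 1 / (1 + max(deg_i, deg_j)) for neighbors, with the
    # self-weight absorbing the remainder of each row.
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W
```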
A new robust continuous-time optimization algorithm for distributed problems is presented that guarantees fixed-time convergence. The algorithm is based on a Lyapunov function …
Decentralized optimization to minimize a finite sum of functions, distributed over a network of nodes, has been a significant area within control and signal-processing research due to its …
R Xin, UA Khan, S Kar - IEEE Transactions on Signal …, 2021 - ieeexplore.ieee.org
In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes. Integrating a technique called gradient tracking in decentralized …
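Putting the pieces together, here is a self-contained toy run of gradient tracking with streaming (online) stochastic gradients on a non-convex local cost. The ring topology, cost function, noise level, and step size are all illustrative assumptions, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 2
adj = np.zeros((n, n))
for i in range(n):                          # undirected ring topology
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
W = adj / 3 + np.eye(n) / 3                 # Metropolis weights on a degree-2 ring

targets = rng.normal(size=(n, d))           # each node's private data (illustrative)

def stoch_grad(i, x):
    # Fresh noisy gradient of the non-convex local cost
    # f_i(x) = sum(cos(x - targets[i])); the online setting is modeled
    # by drawing new additive noise at every query.
    return -np.sin(x - targets[i]) + 0.1 * rng.normal(size=d)

x = rng.normal(size=(n, d))
g = np.stack([stoch_grad(i, x[i]) for i in range(n)])
y = g.copy()                                # trackers start at local gradients
for _ in range(2000):
    x = W @ x - 0.05 * y                    # gradient-tracking iteration
    g_new = np.stack([stoch_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g                   # track the average gradient
    g = g_new
print("disagreement across nodes:", np.linalg.norm(x - x.mean(axis=0)))
```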