An event-triggering algorithm for decentralized stochastic optimization over networks

Y Li, Y Chen, Q Lü, S Deng, H Li - Journal of the Franklin Institute, 2023 - Elsevier
In this paper, we study the problem of decentralized optimization to minimize a finite sum of
local convex cost functions over an undirected network. Compared with the existing works …
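The finite-sum problem that recurs throughout these entries can be written in a standard form (notation assumed, not taken from any single abstract):

```latex
\min_{x \in \mathbb{R}^d} \; f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
```

where $n$ agents are connected by a network and each local cost $f_i$ is known only to agent $i$, so minimizing $f$ requires communication between neighboring agents.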

S-DIGing: A stochastic gradient tracking algorithm for distributed optimization

H Li, L Zheng, Z Wang, Y Yan, L Feng… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
In this article, we study convex optimization problems in which the agents of a network cooperatively minimize a global objective function consisting of multiple local …

Achieving Linear Speedup with Network-Independent Learning Rates in Decentralized Stochastic Optimization

H Yuan, SA Alghunaim, K Yuan - 2023 62nd IEEE Conference …, 2023 - ieeexplore.ieee.org
Decentralized stochastic optimization has become a crucial tool for addressing large-scale
machine learning and control problems. In decentralized algorithms, all computing nodes …

Gradient-push algorithm for distributed optimization with event-triggered communications

J Kim, W Choi - IEEE Access, 2022 - ieeexplore.ieee.org
Decentralized optimization problems consist of multiple agents connected by a network. Each agent has its own local cost function, and the goal is to minimize the sum of these functions …

A sharp estimate on the transient time of distributed stochastic gradient descent

S Pu, A Olshevsky… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
This article is concerned with minimizing the average of cost functions over a network, in
which agents may communicate and exchange information with each other. We consider the …
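The distributed stochastic gradient descent (DSGD) baseline analyzed in transient-time studies like this one can be sketched as follows. This is a minimal illustration under assumed toy data (quadratic local costs on a 5-agent ring), not the paper's experimental setup: each agent mixes its iterate with its neighbors' via a doubly stochastic matrix, then takes a noisy local gradient step.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                             # number of agents
b = np.arange(n, dtype=float)     # local targets; f_i(x) = 0.5*(x - b_i)^2,
                                  # so the global minimizer is b.mean() = 2.0

# Doubly stochastic mixing matrix for a ring graph (uniform neighbor weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                   # each agent's estimate of the minimizer
alpha = 0.05                      # constant step size

for _ in range(2000):
    # Stochastic gradient: true local gradient (x_i - b_i) plus zero-mean noise.
    g = (x - b) + 0.01 * rng.standard_normal(n)
    # DSGD update: average with neighbors, then take a local gradient step.
    x = W @ x - alpha * g

print(x)  # all agents' estimates cluster near the global minimizer 2.0
```

With a constant step size the iterates settle in a neighborhood of the global minimizer rather than converging exactly; the size of that neighborhood (and the transient time before network averaging dominates) is what analyses of this kind quantify.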

Problem-Parameter-Free Decentralized Nonconvex Stochastic Optimization

J Li, X Chen, S Ma, M Hong - arXiv preprint arXiv:2402.08821, 2024 - arxiv.org
Existing decentralized algorithms usually require knowledge of problem parameters for
updating local iterates. For example, the hyperparameters (such as learning rate) usually …

Removing data heterogeneity influence enhances network topology dependence of decentralized SGD

K Yuan, SA Alghunaim, X Huang - Journal of Machine Learning Research, 2023 - jmlr.org
We consider decentralized stochastic optimization problems, where a network of n nodes
cooperates to find a minimizer of the globally-averaged cost. A widely studied decentralized …

S-ADDOPT: Decentralized stochastic first-order optimization over directed graphs

MI Qureshi, R Xin, S Kar… - IEEE Control Systems …, 2020 - ieeexplore.ieee.org
In this letter, we study decentralized stochastic optimization to minimize a sum of smooth and
strongly convex cost functions when the functions are distributed over a directed network of …

Improving the transient times for distributed stochastic gradient methods

K Huang, S Pu - IEEE Transactions on Automatic Control, 2022 - ieeexplore.ieee.org
We consider the distributed optimization problem where agents, each possessing a local
cost function, collaboratively minimize the average of the cost functions over a connected …

Asynchronous decentralized accelerated stochastic gradient descent

G Lan, Y Zhou - IEEE Journal on Selected Areas in Information …, 2021 - ieeexplore.ieee.org
In this paper, we introduce an asynchronous decentralized accelerated stochastic gradient
descent type of algorithm for decentralized stochastic optimization. Considering …