Distributed derivative-free learning method for stochastic optimization over a network with sparse activity

W Li, M Assaad, S Zheng - IEEE Transactions on Automatic Control, 2021 - ieeexplore.ieee.org
This article addresses a distributed optimization problem in a communication network where
nodes are active sporadically. Each active node applies some learning method to control its …

Distributed derivative-free optimization in large communication networks with sparse activity

W Li, M Assaad - 2018 IEEE Conference on Decision and Control (CDC), 2018 - ieeexplore.ieee.org
This paper addresses a distributed optimization problem in a large communication network,
where nodes are active sporadically. Each active node should properly control its action to …
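
Both entries above study derivative-free (zeroth-order) methods: each active node can only evaluate its cost, not differentiate it, and must build a descent direction from those evaluations. A minimal sketch of the standard two-point gradient estimator such methods rest on (the function name and the smoothing radius delta are illustrative assumptions, not the papers' exact scheme):

    import numpy as np

    def two_point_gradient_estimate(cost, x, delta=1e-3, rng=None):
        # Estimate grad cost(x) from two (possibly noisy) evaluations along a
        # random unit direction u; delta is the smoothing radius (assumed here).
        rng = rng or np.random.default_rng()
        u = rng.standard_normal(x.shape)
        u /= np.linalg.norm(u)
        # Directional finite difference, rescaled by the dimension so the
        # estimator approximates the gradient of a smoothed version of cost.
        return x.size * (cost(x + delta * u) - cost(x - delta * u)) / (2 * delta) * u

Plugging such an estimate into a stochastic-approximation step with a decaying step size is roughly the template that both papers adapt to sporadically active nodes.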

Local exact-diffusion for decentralized optimization and learning

SA Alghunaim - IEEE Transactions on Automatic Control, 2024 - ieeexplore.ieee.org
Distributed optimization methods with local updates have recently attracted a lot of attention
due to their potential to reduce the communication cost of distributed methods. In these …
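
For orientation, a minimal sketch of one exact-diffusion round in adapt-correct-combine form (my rendering under assumed notation, not the paper's Local Exact-Diffusion pseudocode; W is a doubly stochastic mixing matrix):

    import numpy as np

    def exact_diffusion_round(X, Psi_prev, grads, W, mu):
        # X, Psi_prev, grads: (n, d) arrays, one row per agent; mu: step size.
        Psi = X - mu * grads        # adapt: local (stochastic) gradient step
        Phi = Psi + X - Psi_prev    # correct: removes the steady-state bias of plain diffusion
        X_next = W @ Phi            # combine: mix with neighbors over the network
        return X_next, Psi

The "local" variant the paper studies interleaves several adapt steps between combine steps, which is where the communication savings come from.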

A sharp estimate on the transient time of distributed stochastic gradient descent

S Pu, A Olshevsky… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
This article is concerned with minimizing the average of cost functions over a network, in
which agents may communicate and exchange information with each other. We consider the …

Improving the transient times for distributed stochastic gradient methods

K Huang, S Pu - IEEE Transactions on Automatic Control, 2022 - ieeexplore.ieee.org
We consider the distributed optimization problem where agents, each possessing a local
cost function, collaboratively minimize the average of the cost functions over a connected …
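
The two entries above analyze decentralized stochastic gradient descent (DSGD) and its variants; the transient time is the number of iterations before the network effect washes out and the method matches the asymptotic rate of centralized SGD. A minimal sketch of the combine-then-adapt DSGD step being analyzed (the mixing matrix W and the noise model are illustrative assumptions):

    import numpy as np

    def dsgd_step(X, stoch_grads, W, alpha):
        # X, stoch_grads: (n, d) arrays, one row per agent;
        # W: (n, n) doubly stochastic mixing matrix; alpha: step size.
        return W @ X - alpha * stoch_grads   # mix with neighbors, then descend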

Distributed Adaptive Gradient Algorithm with Gradient Tracking for Stochastic Non-Convex Optimization

D Han, K Liu, Y Lin, Y Xia - IEEE Transactions on Automatic Control, 2024 - ieeexplore.ieee.org
This paper considers a distributed stochastic non-convex optimization problem, where the
nodes in a network cooperatively minimize a sum of L-smooth local cost functions with sparse …
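
A minimal sketch of the generic gradient-tracking recursion that this line of work augments with adaptive step sizes (this is the standard tracker, not the paper's specific adaptive algorithm):

    import numpy as np

    def gradient_tracking_step(X, Y, grads_prev, grad_fn, W, alpha):
        # X: (n, d) iterates; Y: (n, d) gradient trackers;
        # grad_fn(X) returns the (n, d) local stochastic gradients at X.
        X_next = W @ X - alpha * Y                 # descend along the tracked direction
        grads_next = grad_fn(X_next)
        Y_next = W @ Y + grads_next - grads_prev   # Y tracks the network-average gradient
        return X_next, Y_next, grads_next

With the usual initialization Y_0 = grad_fn(X_0) and doubly stochastic W, averaging the Y-update over agents shows that the mean of Y always equals the mean of the latest local gradients, which is the tracking property such analyses rely on.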

Convergence in high probability of distributed stochastic gradient descent algorithms

K Lu, H Wang, H Zhang, L Wang - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In this article, the problem of distributed optimization with nonconvex objective functions is
studied by employing a network of agents. Each agent only has access to a noisy estimate …
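
High-probability results of this kind bound the tail of the optimality measure directly rather than its expectation; an illustrative template for the nonconvex case (not the paper's theorem, and the constant C is a placeholder):

    \Pr\Big[\min_{k \le K} \|\nabla f(\bar{x}_k)\|^2 \le \frac{C \log(1/\delta)}{\sqrt{K}}\Big] \ge 1 - \delta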

A communication-efficient linearly convergent algorithm with variance reduction for distributed stochastic optimization

J Lei, P Yi, J Chen, Y Hong - 2020 European Control Conference (ECC), 2020 - ieeexplore.ieee.org
This paper considers a distributed stochastic strongly convex optimization problem, where agents
over a network aim to cooperatively minimize the average of all agents' local cost functions …
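
A minimal sketch of the SVRG-style control-variate gradient that drives linear convergence in such variance-reduced methods (illustrative; the paper's estimator and communication schedule may differ):

    import numpy as np

    def variance_reduced_gradient(grad_sample, x, x_snapshot, full_grad_snapshot, idx):
        # grad_sample(x, i): gradient of the i-th sample cost at x.
        # The snapshot terms form a control variate: the estimator stays
        # unbiased, and its variance vanishes as x approaches x_snapshot,
        # which permits constant step sizes and a linear rate.
        return (grad_sample(x, idx)
                - grad_sample(x_snapshot, idx)
                + full_grad_snapshot)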

Convergence rates of distributed gradient methods under random quantization: A stochastic approximation approach

TT Doan, ST Maguluri… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
We study gradient methods for solving distributed convex optimization problems over a
network when the communication bandwidth between the nodes is limited, and so …
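
A minimal sketch of the unbiased dithered quantizer that such stochastic-approximation analyses typically assume (the grid spacing `step` is an illustrative parameter):

    import numpy as np

    def random_quantize(x, step=0.01, rng=None):
        # Stochastic rounding of each coordinate to a grid of spacing `step`:
        # round up with probability equal to the fractional part, so that
        # E[random_quantize(x)] = x and the quantization noise is bounded by `step`.
        rng = rng or np.random.default_rng()
        scaled = x / step
        low = np.floor(scaled)
        return step * (low + (rng.random(x.shape) < (scaled - low)))

Unbiasedness is what lets the quantization error be folded into the stochastic-gradient noise in the convergence-rate analysis.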

Asynchronous stochastic gradient descent over decentralized datasets

Y Du, K You - IEEE Transactions on Control of Network Systems, 2021 - ieeexplore.ieee.org
The computational advantage of asynchronous stochastic gradient descent (ASGD) over its
synchronous version has been well documented in recent works. Unfortunately, it …