Asynchronous decentralized learning over unreliable wireless networks

E Jeong, M Zecchin… - IEEE International Conference on Communications (ICC), 2022 - ieeexplore.ieee.org
Decentralized learning enables edge users to collaboratively train models by exchanging
information via device-to-device communication, yet prior works have been limited to …

Network-density-controlled decentralized parallel stochastic gradient descent in wireless systems

K Sato, Y Satoh, D Sugimura - IEEE International Conference on Communications (ICC), 2020 - ieeexplore.ieee.org
This paper proposes a communication strategy for decentralized learning on wireless
systems. Our discussion is based on the decentralized parallel stochastic gradient descent …
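
This entry builds on decentralized parallel SGD (D-PSGD). As a reference point, here is a minimal sketch of the standard D-PSGD update, a local gradient step followed by averaging with neighbors through a mixing matrix W. The ring topology, uniform 1/3 weights, and quadratic toy losses are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                        # number of agents, model dimension
targets = rng.normal(size=(n, d))  # agent i minimizes ||x - targets[i]||^2 / 2

# Symmetric, doubly stochastic mixing matrix for a ring: self + two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

x = np.zeros((n, d))               # one local model per agent
lr = 0.1
for t in range(200):
    grads = x - targets            # gradient of each local quadratic
    x = W @ (x - lr * grads)       # D-PSGD: local step, then neighbor averaging

print("consensus gap:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

This uses the "adapt-then-combine" ordering (gradient step before mixing); the "combine-then-adapt" variant, W applied before the gradient step, appears in the literature as well.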

Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access

Z Chen, M Dahl, EG Larsson - 2023 IEEE 24th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2023 - ieeexplore.ieee.org
In this work, we focus on the communication aspect of decentralized learning, which
involves multiple agents training a shared machine learning model using decentralized …

Exploring the error-runtime trade-off in decentralized optimization

J Wang, AK Sahu, G Joshi, S Kar - 2020 54th Asilomar Conference on Signals, Systems, and Computers, 2020 - ieeexplore.ieee.org
Decentralized stochastic gradient descent (SGD) has recently become one of the most
promising methods for exploiting data parallelism to train a machine learning model on a …
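
The error-runtime trade-off this entry refers to can be made concrete by varying how often agents communicate. Below is a hedged sketch, not the paper's algorithm: agents run local SGD and gossip only every `tau` iterations, so a larger `tau` spends less time on communication per iteration but leaves a larger consensus gap. The ring topology and toy quadratic losses are assumptions.

```python
import numpy as np

def run(tau, iters=300, n=5, d=3, lr=0.1, seed=0):
    """Local SGD with one gossip round every `tau` steps (illustrative toy)."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(n, d))
    W = np.zeros((n, n))                     # doubly stochastic ring mixing
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    x = np.zeros((n, d))
    comms = 0
    for t in range(iters):
        x -= lr * (x - targets)              # local gradient steps
        if (t + 1) % tau == 0:
            x = W @ x                        # periodic averaging round
            comms += 1
    gap = np.max(np.abs(x - x.mean(axis=0)))
    return comms, gap

for tau in (1, 5, 20):
    comms, gap = run(tau)
    print(f"tau={tau}: {comms} gossip rounds, consensus gap {gap:.4f}")
```

Sweeping `tau` traces one curve of the trade-off: fewer rounds (lower runtime) against a larger consensus gap (higher error).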

Decentralized edge learning via unreliable device-to-device communications

Z Jiang, G Yu, Y Cai, Y Jiang - IEEE Transactions on Wireless Communications, 2022 - ieeexplore.ieee.org
Distributed machine learning has been extensively employed in wireless systems, where it
can leverage abundant data distributed over massive numbers of devices to collaboratively train a high …

Decentralized federated learning with unreliable communications

H Ye, L Liang, GY Li - IEEE Journal of Selected Topics in Signal Processing, 2022 - ieeexplore.ieee.org
Decentralized federated learning, which inherits from decentralized learning, enables edge
devices to collaborate on model training in a peer-to-peer manner without the assistance of …
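
A common way to model the unreliable communications studied in this entry and the previous one is to drop each link independently per round and rebuild the mixing matrix from the links that survived. The sketch below uses that i.i.d. Bernoulli link-failure model as an assumption; it is not the authors' exact scheme, and the ring topology, failure rate `p_fail`, and quadratic toy losses are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, p_fail = 5, 3, 0.3
targets = rng.normal(size=(n, d))
edges = [(i, (i + 1) % n) for i in range(n)]   # ring topology
x = np.zeros((n, d))
lr = 0.1

for t in range(300):
    x -= lr * (x - targets)                    # local gradient step
    # Build this round's mixing matrix from the links that survived.
    W = np.eye(n)
    for i, j in edges:
        if rng.random() > p_fail:              # link (i, j) delivered both ways
            W[i, i] -= 1 / 3; W[j, j] -= 1 / 3
            W[i, j] += 1 / 3; W[j, i] += 1 / 3
    x = W @ x                                  # gossip over surviving links only

print("consensus gap:", np.max(np.abs(x - x.mean(axis=0))))
```

Because weight is moved symmetrically only on surviving links, each round's W stays doubly stochastic, so the network average is preserved even when links fail.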

Robust decentralized stochastic gradient descent over unstable networks

Y Zheng, L Zhang, S Chen, X Zhang, Z Cai… - Computer …, 2023 - Elsevier
Decentralized learning is essential for large-scale deep learning due to its advantage in
alleviating the communication bottleneck. Most decentralized learning algorithms focus on …

Communication-efficient distributionally robust decentralized learning

M Zecchin, M Kountouris, D Gesbert - arXiv preprint arXiv:2205.15614, 2022 - arxiv.org
Decentralized learning algorithms empower interconnected devices to share data and
computational resources to collaboratively train a machine learning model without the aid of …
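
Distributionally robust learning of the kind this entry studies typically replaces the average of the local losses with a worst-case mixture, min_x max_{λ∈Δ} Σ_i λ_i f_i(x). The sketch below solves that min-max on a toy problem with gradient descent on x and exponentiated-gradient ascent on λ; it only illustrates the objective and is not the paper's communication-efficient decentralized algorithm. The step sizes and quadratic losses are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 3
targets = rng.normal(size=(n, d))        # f_i(x) = ||x - targets[i]||^2 / 2
x = np.zeros(d)
lam = np.full(n, 1 / n)                  # mixture weights on the simplex
lr_x, lr_lam = 0.1, 0.5

for t in range(500):
    losses = 0.5 * np.sum((x - targets) ** 2, axis=1)
    grad_x = lam @ (x - targets)         # gradient of the lam-weighted loss
    x -= lr_x * grad_x                   # descent on the model
    lam *= np.exp(lr_lam * losses)       # exponentiated-gradient ascent on lam
    lam /= lam.sum()                     # project back onto the simplex

print("worst-case loss:", losses.max(), "weights:", np.round(lam, 3))
```

At the saddle point, λ concentrates on the hardest local distributions, which is what distinguishes the robust solution from plain average-loss training.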

Faster Convergence with Less Communication: Broadcast-Based Subgraph Sampling for Decentralized Learning over Wireless Networks

DP Herrera, Z Chen, EG Larsson - arXiv preprint arXiv:2401.13779, 2024 - arxiv.org
Consensus-based decentralized stochastic gradient descent (D-SGD) is a widely adopted
algorithm for decentralized training of machine learning models across networked agents. A …
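
One reading of broadcast-based subgraph sampling, offered here only as a hedged interpretation of the title: in each round a random subset of nodes broadcasts, and mixing happens only on the subgraph those broadcasts activate. The activation probability `p_active` and ring topology below are assumptions, and a broadcast is treated as activating the full edge (symmetric mixing) for simplicity.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, p_active = 6, 3, 0.5
targets = rng.normal(size=(n, d))
edges = [(i, (i + 1) % n) for i in range(n)]   # ring topology
x = np.zeros((n, d))
lr, tx = 0.1, 0

for t in range(300):
    x -= lr * (x - targets)                    # local gradient step
    active = rng.random(n) < p_active          # nodes that broadcast this round
    tx += int(active.sum())                    # one transmission per broadcaster
    W = np.eye(n)
    for i, j in edges:
        if active[i] or active[j]:             # edge is in the sampled subgraph
            W[i, i] -= 1 / 3; W[j, j] -= 1 / 3
            W[i, j] += 1 / 3; W[j, i] += 1 / 3
    x = W @ x                                  # mix only over the sampled subgraph

print("transmissions:", tx, "consensus gap:", np.max(np.abs(x - x.mean(axis=0))))
```

Lowering `p_active` reduces the transmission count while slowing consensus, which is the communication/convergence knob the title alludes to.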

Adjacent Leader Decentralized Stochastic Gradient Descent

H He, J Wang, A Choromanska - arXiv preprint arXiv:2405.11389, 2024 - arxiv.org
This work focuses on the decentralized deep learning optimization framework. We propose
Adjacent Leader Decentralized Stochastic Gradient Descent (AL-DSGD) for improving final model …
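
Based only on the title and snippet, AL-DSGD pulls each worker toward its best-performing neighbor, the "adjacent leader". The sketch below adds such a pull term, weighted by a hypothetical coefficient `pull`, on top of a plain gossip-plus-gradient update. Treat it as a loose illustration rather than the authors' algorithm; the ring topology, quadratic toy losses, and the use of local loss to rank neighbors are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, lr, pull = 5, 3, 0.1, 0.2
targets = rng.normal(size=(n, d))
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring
x = rng.normal(size=(n, d))

for t in range(300):
    losses = 0.5 * np.sum((x - targets) ** 2, axis=1)
    new_x = np.empty_like(x)
    for i in range(n):
        nbrs = neighbors[i]
        avg = (x[i] + x[nbrs].sum(axis=0)) / (len(nbrs) + 1)    # gossip average
        leader = min(nbrs, key=lambda j: losses[j])             # best adjacent node
        grad = x[i] - targets[i]
        new_x[i] = avg - lr * grad + pull * (x[leader] - x[i])  # pull toward leader
    x = new_x

print("consensus gap:", np.max(np.abs(x - x.mean(axis=0))))
```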