Communication-efficient distributed deep learning: A comprehensive survey

Z Tang, S Shi, W Wang, B Li, X Chu - arXiv preprint arXiv:2003.06307, 2020 - arxiv.org
Distributed deep learning (DL) has become prevalent in recent years to reduce training time
by leveraging multiple computing devices (e.g., GPUs/TPUs) due to larger models and …

On biased compression for distributed learning

A Beznosikov, S Horváth, P Richtárik… - Journal of Machine …, 2023 - jmlr.org
In the last few years, various communication compression techniques have emerged as an
indispensable tool helping to alleviate the communication bottleneck in distributed learning …

Lower bounds and nearly optimal algorithms in distributed learning with communication compression

X Huang, Y Chen, W Yin… - Advances in Neural …, 2022 - proceedings.neurips.cc
Recent advances in distributed optimization and learning have shown that communication
compression is one of the most effective means of reducing communication. While there …

FedQClip: Accelerating Federated Learning via Quantized Clipped SGD

Z Qu, N Jia, B Ye, S Hu, S Guo - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated Learning (FL) has emerged as a promising technique for collaboratively training
machine learning models among multiple participants while preserving privacy-sensitive …

Better Methods and Theory for Federated Learning: Compression, Client Selection and Heterogeneity

S Horváth - arXiv preprint arXiv:2207.00392, 2022 - arxiv.org
Federated learning (FL) is an emerging machine learning paradigm involving multiple
clients, e.g., mobile phone devices, with an incentive to collaborate in solving a machine …

[BOOK][B] Edge Learning for Distributed Big Data Analytics: Theory, Algorithms, and System Design

S Guo, Z Qu - 2022 - books.google.com
"" Traditionally, to develop these intelligent services and applications, big data are stored
and processed in a centralized model. However, with the proliferation of edge devices and …

Improving Federated Learning Communication Efficiency with a Gradient Compression Method that Fuses Global Momentum

CC Kuo - 2022 - search.proquest.com
In federated learning (FL), adding more clients to training lets the overall model learn from more data and improve the machine learning
model. However, traditional federated learning designs consume a large amount of communication traffic, so a balance must be struck between communication traffic and the number of clients …

Improving Federated Learning Communication Efficiency with Global Momentum Fusion for Gradient Compression Schemes

CC Kuo, TT Kuo, CY Lin - arXiv preprint arXiv:2211.09320, 2022 - arxiv.org
Communication costs within federated learning hinder system scalability, limiting how much data can be reached from more clients.
The proposed FL adopts a hub-and-spoke network topology …

Communication-efficient federated learning via quantized clipped SGD

N Jia, Z Qu, B Ye - Wireless Algorithms, Systems, and Applications: 16th …, 2021 - Springer
Communication has been considered a major bottleneck of Federated Learning (FL) in
mobile edge networks since participating workers iteratively transmit gradients to and …
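Several of the entries above (FedQClip and the quantized clipped SGD paper) refer to combining gradient clipping with quantization to reduce communication. Below is a minimal illustrative sketch of that general idea (clipping followed by unbiased stochastic uniform quantization), written in NumPy. It is not the algorithm of any specific paper listed here; the function names and parameters (clip_threshold, num_levels) are assumptions for illustration only.

```python
# Illustrative sketch only: clip a local gradient, then apply stochastic
# uniform quantization before communicating it to the server.
import numpy as np

def clip_gradient(grad, clip_threshold):
    """Scale the gradient down if its L2 norm exceeds clip_threshold."""
    norm = np.linalg.norm(grad)
    if norm > clip_threshold:
        grad = grad * (clip_threshold / norm)
    return grad

def quantize(grad, num_levels=16):
    """Stochastic uniform quantization: each entry's magnitude (relative to
    the gradient norm) is randomly rounded to one of num_levels evenly
    spaced levels, so the quantizer is unbiased in expectation."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad, dtype=np.int8), 0.0
    scaled = np.abs(grad) / norm * num_levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part (unbiasedness).
    levels = lower + (np.random.rand(*grad.shape) < (scaled - lower))
    signs = np.sign(grad).astype(np.int8)
    return signs * levels.astype(np.int8), norm

def dequantize(q, norm, num_levels=16):
    """Reconstruct an approximate gradient from quantized levels and the norm."""
    return q.astype(np.float64) * norm / num_levels

# Example: a worker compresses its gradient before sending it to the server.
g = np.random.randn(5)
q, norm = quantize(clip_gradient(g, clip_threshold=1.0))
g_hat = dequantize(q, norm)
```

In this sketch the worker only transmits the int8 level vector and one scalar norm instead of full-precision gradients, which is the communication saving the works above target; the specific clipping schedule, quantizer, and error-compensation details differ across the listed papers.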