Sign-Based Gradient Descent With Heterogeneous Data: Convergence and Byzantine Resilience

R Jin, Y Liu, Y Huang, X He, T Wu… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Communication overhead has become one of the major bottlenecks in the distributed
training of modern deep neural networks. With such consideration, various quantization …
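The snippet above is cut off, but the technique in the title is a standard one: each worker transmits only the sign of each gradient coordinate (1 bit), and the server aggregates by coordinate-wise majority vote, which also gives some resilience to a minority of Byzantine workers. A minimal single-machine sketch of that generic scheme (not necessarily the exact algorithm of this paper; the worker offsets and the sign-flipping attacker are illustrative assumptions):

```python
import numpy as np

def sign_sgd_step(params, worker_grads, lr):
    """One signSGD step with majority-vote aggregation.

    Each row of worker_grads is one worker's local gradient; only its
    sign would be communicated. Generic sketch, not a specific paper's method.
    """
    signs = np.sign(worker_grads)      # what each worker transmits, in {-1, 0, +1}
    vote = np.sign(signs.sum(axis=0))  # server: coordinate-wise majority vote
    return params - lr * vote          # descend along the voted sign direction

# Toy usage: minimize f(x) = ||x||^2 (true gradient 2x) with 4 honest
# workers holding slightly heterogeneous data (fixed offsets) and 1
# Byzantine worker that flips the sign of its report.
x = np.array([1.0, -2.0, 3.0])
offsets = [-0.05, -0.025, 0.025, 0.05]  # assumed per-worker data skew
for _ in range(250):
    honest = [2 * x + off for off in offsets]
    byzantine = -(2 * x)                # attacker reports the negated gradient
    grads = np.stack(honest + [byzantine])
    x = sign_sgd_step(x, grads, lr=0.02)
print(np.abs(x).max())  # small: majority vote outvotes the single attacker
```

Because the honest workers outnumber the attacker four to one, the coordinate-wise vote still tracks the true gradient sign whenever the honest signs agree, so the iterate settles into a neighborhood of the optimum whose radius scales with the step size.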

Distributed learning based on 1-bit gradient coding in the presence of stragglers

C Li, M Skoglund - IEEE Transactions on Communications, 2024 - ieeexplore.ieee.org
This paper considers the problem of distributed learning (DL) in the presence of stragglers.
For this problem, DL methods based on gradient coding have been widely investigated …

Byzantine-resilient Federated Learning Employing Normalized Gradients on Non-IID Datasets

S Zuo, X Yan, R Fan, L Shen, P Zhao, J Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
In practical federated learning (FL) systems, the presence of malicious Byzantine attacks
and data heterogeneity often introduces biases into the learning process. However, existing …

A survey on secure decentralized optimization and learning

C Liu, N Bastianello, W Huo, Y Shi… - arXiv preprint arXiv …, 2024 - arxiv.org
Decentralized optimization has become a standard paradigm for solving large-scale
decision-making problems and training large machine learning models without centralizing …

C-RSA: Byzantine-robust and communication-efficient distributed learning in the non-convex and non-IID regime

X He, H Zhu, Q Ling - Signal Processing, 2023 - Elsevier
The emerging federated learning applications raise challenges of Byzantine-robustness and
communication efficiency in distributed non-convex learning over non-IID data. To address …

Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity

S Zuo, X Yan, R Fan, H Hu, H Shan… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks
and data heterogeneity. A novel Robust Average Gradient Algorithm (RAGA) is proposed …

Byzantine-Robust and Communication-Efficient Distributed Learning via Compressed Momentum Filtering

C Liu, Y Li, Y Yi, KH Johansson - arXiv preprint arXiv:2409.08640, 2024 - arxiv.org
Distributed learning has become the standard approach for training large-scale machine
learning models across private data silos. While distributed learning enhances privacy …

Byzantine-Robust Compressed and Momentum-based Variance Reduction in Federated Learning

S Mao, J Zhang, X Hu, X Zheng - 2024 27th International …, 2024 - ieeexplore.ieee.org
Federated learning involves a group of workers and a central server to train a machine
learning model in a distributed manner. However, the distributed structure poses challenges …