Distributed gradient descent algorithm robust to an arbitrary number of Byzantine attackers

X Cao, L Lai - IEEE Transactions on Signal Processing, 2019 - ieeexplore.ieee.org
Due to the growth of modern dataset sizes and the desire to harness the computing power of
multiple machines, there has been a recent surge of interest in the design of distributed machine …
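
The visible snippet is motivational, but tolerating an arbitrary number of attackers typically requires a trusted reference at the server rather than majority-based filtering. A minimal sketch under that assumption, where `server_grad` is a noisy gradient the server computes on a small clean sample of its own and `xi` is an illustrative acceptance threshold:

```python
import numpy as np

def filter_and_aggregate(worker_grads, server_grad, xi=1.0):
    """Keep only worker gradients within distance `xi` of the server's own
    noisy gradient estimate, then average the survivors. Acceptance is
    judged against a trusted reference, not against the worker population,
    so no fraction of Byzantine workers can outvote the honest signal."""
    accepted = [g for g in worker_grads
                if np.linalg.norm(g - server_grad) <= xi]
    # Fall back to the server's own estimate if every worker is rejected.
    if not accepted:
        return server_grad
    return np.mean(accepted, axis=0)
```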

Byzantine-resilient stochastic gradient descent for distributed learning: A Lipschitz-inspired coordinate-wise median approach

H Yang, X Zhang, M Fang, J Liu - 2019 IEEE 58th Conference …, 2019 - ieeexplore.ieee.org
In this work, we consider the resilience of distributed learning algorithms based on stochastic
gradient descent (SGD) in the presence of potentially Byzantine attackers, who …
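
The title names a coordinate-wise median aggregation rule. A minimal sketch of plain coordinate-wise median aggregation (the paper's Lipschitz-inspired refinement is not reproduced here):

```python
import numpy as np

def coordinate_wise_median(worker_grads):
    """Aggregate stochastic gradients by taking the median of each
    coordinate across workers; a single Byzantine value cannot drag a
    coordinate arbitrarily far, as it could with the mean."""
    stacked = np.stack(worker_grads)        # shape: (num_workers, dim)
    return np.median(stacked, axis=0)

# Example: two honest workers near the true gradient, one attacker.
grads = [np.array([1.0, -2.0]), np.array([1.2, -1.8]), np.array([1e6, 1e6])]
print(coordinate_wise_median(grads))        # stays close to the honest values
```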

Federated variance-reduced stochastic gradient descent with robustness to Byzantine attacks

Z Wu, Q Ling, T Chen… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
This paper deals with distributed finite-sum optimization for learning over multiple workers in
the presence of malicious Byzantine attacks. Most resilient approaches so far combine …
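
The snippet cuts off before the method, but variance reduction in this setting is commonly implemented SAGA-style on each worker, so that honest gradients concentrate and Byzantine outliers stand out; the server would then aggregate the returned gradients robustly (e.g., with a geometric median). A worker-side sketch under that assumption, with illustrative names:

```python
import numpy as np

class SagaWorker:
    """SAGA-style corrected stochastic gradient for one worker. Keeps a
    table of the last gradient seen for each local sample; the correction
    drives the variance of the sent gradient toward zero, making
    adversarial outliers easier to separate from honest noise."""
    def __init__(self, grad_fn, num_samples, dim):
        self.grad_fn = grad_fn        # grad_fn(i, w) -> gradient of sample i
        self.table = np.zeros((num_samples, dim))
        self.table_avg = np.zeros(dim)
        self.n = num_samples

    def corrected_grad(self, w):
        i = np.random.randint(self.n)
        g = self.grad_fn(i, w)
        corrected = g - self.table[i] + self.table_avg
        # Update the running table average, then store the new gradient.
        self.table_avg = self.table_avg + (g - self.table[i]) / self.n
        self.table[i] = g
        return corrected
```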

ByGARS: Byzantine SGD with arbitrary number of attackers

J Regatti, H Chen, A Gupta - arXiv preprint arXiv:2006.13421, 2020 - arxiv.org
We propose two novel stochastic gradient descent algorithms, ByGARS and ByGARS++, for
distributed machine learning in the presence of any number of Byzantine adversaries. In …
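
The snippet does not reveal the mechanism, so the following is only an illustrative guess at a reputation-score scheme: the server scores each worker by how well its gradient agrees with a gradient computed on a small auxiliary dataset. The function names, the step size `gamma`, and the inner-product update are all assumptions, not the paper's algorithm:

```python
import numpy as np

def update_reputations(q, worker_grads, aux_grad, gamma=0.1):
    """Hypothetical reputation update: a worker whose gradient aligns with
    the gradient on the server's auxiliary data gains reputation; a
    misaligned (possibly Byzantine) worker loses it."""
    for j, g in enumerate(worker_grads):
        q[j] += gamma * float(np.dot(g, aux_grad))
    return q

def aggregate(q, worker_grads):
    # Reputation-weighted sum; a negative score effectively flips or
    # mutes an adversarial worker's contribution.
    return sum(qj * g for qj, g in zip(q, worker_grads))
```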

Distributed training with heterogeneous data: Bridging median- and mean-based algorithms

X Chen, T Chen, H Sun, SZ Wu… - Advances in Neural …, 2020 - proceedings.neurips.cc
Recently, there has been growing interest in the study of median-based algorithms for distributed
non-convex optimization. Two prominent examples include signSGD with majority vote, an …
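
The snippet explicitly names signSGD with majority vote, one of the two algorithm families being bridged. A minimal sketch of that update rule:

```python
import numpy as np

def majority_vote_sign_update(w, worker_grads, lr=0.01):
    """signSGD with majority vote: each worker sends only the sign of its
    gradient, and the server steps in the direction of the element-wise
    majority, so outlier magnitudes carry no weight at all."""
    signs = np.stack([np.sign(g) for g in worker_grads])
    vote = np.sign(signs.sum(axis=0))       # element-wise majority
    return w - lr * vote
```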

Distributed statistical machine learning in adversarial settings: Byzantine gradient descent

Y Chen, L Su, J Xu - Proceedings of the ACM on Measurement and …, 2017 - dl.acm.org
We consider the distributed statistical learning problem over decentralized systems that are
prone to adversarial attacks. This setup arises in many practical applications, including …
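
A sketch of the geometric-median-of-means aggregation this line of work is known for: gradients are averaged within groups of workers, and the server takes the geometric median of the group means. The group count and Weiszfeld parameters below are illustrative:

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration for the geometric median of a set of vectors."""
    y = np.mean(points, axis=0)
    for _ in range(iters):
        d = np.array([max(np.linalg.norm(p - y), eps) for p in points])
        y_next = np.average(points, axis=0, weights=1.0 / d)
        if np.linalg.norm(y_next - y) < eps:
            return y_next
        y = y_next
    return y

def median_of_means(worker_grads, num_groups):
    """Average within groups, then take the geometric median of the group
    means; honest groups outvote a minority of corrupted ones."""
    groups = np.array_split(np.stack(worker_grads), num_groups)
    means = [g.mean(axis=0) for g in groups]
    return geometric_median(means)
```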

FABA: An algorithm for fast aggregation against Byzantine attacks in distributed neural networks

Q Xia, Z Tao, Z Hao, Q Li - IJCAI, 2019 - par.nsf.gov
Training a large-scale deep neural network on a single machine is becoming increasingly
difficult for complex network models. Distributed training provides …
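
A sketch of the fast-aggregation idea suggested by the title: repeatedly discard the gradient farthest from the current mean, assuming the server knows an upper bound on the number of Byzantine workers:

```python
import numpy as np

def faba_aggregate(worker_grads, num_byzantine):
    """FABA-style aggregation sketch: remove the gradient farthest from
    the mean once per suspected Byzantine worker, then average the rest.
    Each removal costs only a mean and a distance pass, so the rule is
    cheap compared with median-based alternatives."""
    grads = list(worker_grads)
    for _ in range(num_byzantine):
        mean = np.mean(grads, axis=0)
        dists = [np.linalg.norm(g - mean) for g in grads]
        grads.pop(int(np.argmax(dists)))    # drop the worst outlier
    return np.mean(grads, axis=0)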

Byzantine-robust distributed learning: Towards optimal statistical rates

D Yin, Y Chen, R Kannan… - … conference on machine …, 2018 - proceedings.mlr.press
In this paper, we develop distributed optimization algorithms that are provably robust against
Byzantine failures, that is, arbitrary and potentially adversarial behavior, in distributed computing …
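
One of the aggregation rules this paper analyzes is the coordinate-wise trimmed mean. A minimal sketch:

```python
import numpy as np

def trimmed_mean(worker_grads, beta):
    """Coordinate-wise beta-trimmed mean: in every coordinate, drop the
    beta-fraction largest and smallest values across workers, then
    average what remains."""
    stacked = np.sort(np.stack(worker_grads), axis=0)
    m = stacked.shape[0]
    k = int(beta * m)                       # values trimmed from each end
    return stacked[k:m - k].mean(axis=0)
```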

Defending against saddle point attack in Byzantine-robust distributed learning

D Yin, Y Chen, R Kannan… - … Conference on Machine …, 2019 - proceedings.mlr.press
We study robust distributed learning that involves minimizing a non-convex loss function
with saddle points. We consider the Byzantine setting where some worker machines have …
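
A common recipe for escaping saddle points, and a plausible reading of the title, combines a robustly aggregated gradient with a random perturbation whenever that gradient is too small to be informative. The step below is an illustrative sketch, not the paper's algorithm; `robust_grad` is assumed to come from any robust aggregation rule, and `radius`/`tol` are assumed parameters:

```python
import numpy as np

def perturbed_robust_step(w, robust_grad, lr=0.1, radius=1e-2, tol=1e-3,
                          rng=None):
    """One step of perturbed gradient descent on a robustly aggregated
    gradient: near a candidate saddle point (tiny aggregate gradient),
    inject a small random perturbation to escape."""
    rng = rng if rng is not None else np.random.default_rng()
    if np.linalg.norm(robust_grad) < tol:
        u = rng.normal(size=w.shape)
        w = w + radius * u / np.linalg.norm(u)   # jump within a small ball
    return w - lr * robust_grad
```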