In this work, we consider the resilience of distributed learning algorithms based on stochastic gradient descent (SGD) in the presence of potentially Byzantine attackers, who …
Z Wu, Q Ling, T Chen… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
This paper deals with distributed finite-sum optimization for learning over multiple workers in the presence of malicious Byzantine attacks. Most resilient approaches so far combine …
We propose two novel stochastic gradient descent algorithms, ByGARS and ByGARS++, for distributed machine learning in the presence of any number of Byzantine adversaries. In …
Recently, there has been growing interest in the study of median-based algorithms for distributed non-convex optimization. Two prominent examples include signSGD with majority vote, an …
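The snippet above names signSGD with majority vote. As a rough illustration of that aggregation step (a minimal sketch under my own assumptions, not the cited paper's pseudocode; the function name, worker gradients, and step size are all made up for the example):

```python
import numpy as np

def signsgd_majority_vote(worker_grads):
    """Aggregate worker gradients by coordinate-wise sign and majority vote.

    worker_grads: list of 1-D numpy arrays, one (possibly corrupted) gradient per worker.
    Returns a vector of votes in {-1, 0, +1} that the server applies with a small step size.
    """
    signs = np.sign(np.stack(worker_grads))   # each worker effectively sends only signs
    return np.sign(signs.sum(axis=0))         # per-coordinate majority vote

# Toy usage: 5 workers, 1 of them Byzantine (sends an arbitrary scaled vector).
rng = np.random.default_rng(0)
true_grad = rng.normal(size=4)
grads = [true_grad + 0.1 * rng.normal(size=4) for _ in range(4)]
grads.append(-10.0 * true_grad)                     # Byzantine worker
theta = np.zeros(4)
theta -= 0.01 * signsgd_majority_vote(grads)        # server update, learning rate 0.01
```

Because only signs are voted on, a minority of workers cannot flip a coordinate's direction, which is the intuition behind the robustness claims in this line of work.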
Y Chen, L Su, J Xu - Proceedings of the ACM on Measurement and …, 2017 - dl.acm.org
We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including …
Training a large-scale deep neural network on a single machine becomes increasingly difficult as the model grows in complexity. Distributed training provides …
D Yin, Y Chen, R Kannan… - … Conference on Machine …, 2018 - proceedings.mlr.press
In this paper, we develop distributed optimization algorithms that are provably robust against Byzantine failures—arbitrary and potentially adversarial behavior, in distributed computing …
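One widely studied robust aggregation rule in this line of work (e.g., in Yin et al.'s analysis) is the coordinate-wise median of worker gradients. The sketch below is a minimal illustration under my own assumptions, not the paper's exact procedure; the function name and toy data are hypothetical.

```python
import numpy as np

def coordinate_wise_median(worker_grads):
    """Byzantine-robust aggregation: take the median of each gradient coordinate
    across workers, so a minority of arbitrarily corrupted gradients cannot drag
    the aggregate far from the honest majority."""
    return np.median(np.stack(worker_grads), axis=0)

# Toy usage: 5 workers, 1 Byzantine sending a huge arbitrary vector.
rng = np.random.default_rng(1)
honest = [np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=3) for _ in range(4)]
byzantine = [np.array([1e6, 1e6, 1e6])]
agg = coordinate_wise_median(honest + byzantine)   # stays close to the honest gradients
```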
D Yin, Y Chen, R Kannan… - … Conference on Machine …, 2019 - proceedings.mlr.press
We study robust distributed learning that involves minimizing a non-convex loss function with saddle points. We consider the Byzantine setting where some worker machines have …