Distributed computing has become a common approach for large-scale computation tasks due to benefits such as high reliability, scalability, computation speed, and cost …
Federated learning is a key scenario in modern large-scale machine learning where the data remains distributed over a large number of clients and the task is to learn a centralized …
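As a worked form of the setup this snippet describes, the federated objective is commonly written as below; the notation (N clients, client k holding n_k samples with weight p_k) is a standard convention we are assuming, not text from the cited paper.

```latex
% A common formalization of the federated setup: N clients,
% client k holds n_k of the n total samples, p_k = n_k / n.
\[
  \min_{w \in \mathbb{R}^d} F(w) \;=\; \sum_{k=1}^{N} p_k\, F_k(w),
  \qquad
  F_k(w) \;=\; \frac{1}{n_k} \sum_{i=1}^{n_k} \ell(w;\, x_{k,i}),
\]
```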
Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging …
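Since Federated Averaging recurs throughout these results, a minimal sketch of one FedAvg round may help; the client-sampling fraction, local step count, and NumPy gradient interface are our assumptions, not details from the cited papers.

```python
import numpy as np

def fedavg_round(global_w, client_data, grad_fn, lr=0.1, local_steps=5,
                 sample_frac=0.5, rng=np.random.default_rng(0)):
    """One round of Federated Averaging (FedAvg), sketched: sample a
    subset of clients, run local SGD from the global model on each,
    then average the returned models weighted by local dataset size."""
    n_clients = len(client_data)
    chosen = rng.choice(n_clients, max(1, int(sample_frac * n_clients)),
                        replace=False)
    new_w, total = np.zeros_like(global_w), 0
    for k in chosen:
        w = global_w.copy()
        for _ in range(local_steps):          # local SGD, no communication
            w -= lr * grad_fn(w, client_data[k])
        new_w += len(client_data[k]) * w      # weight by local data size
        total += len(client_data[k])
    return new_w / total                      # server-side weighted average
```

A real deployment would add secure aggregation and variable local work per client; this sketch only shows the sampling-and-averaging structure.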
Federated learning is a distributed learning paradigm with two key challenges that differentiate it from traditional distributed optimization: (1) significant variability in terms of the …
Y Chen, Y Ning, M Slawski, et al. - 2020 IEEE International Conference on Big Data, 2020 - ieeexplore.ieee.org
Federated learning (FL) is a machine learning paradigm where a shared central model is learned across distributed devices while the training data remains on these devices …
Huge-scale machine learning problems are nowadays tackled by distributed optimization algorithms, i.e., algorithms that leverage the compute power of many devices for training. The …
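To make "leverage the compute power of many devices" concrete, here is a minimal synchronous data-parallel SGD step; the in-memory mean standing in for an all-reduce, and the grad_fn interface, are our simplifications rather than anything from the snippet.

```python
import numpy as np

def distributed_sgd_step(w, shards, grad_fn, lr=0.1):
    """One synchronous data-parallel SGD step, sketched: each worker
    computes a gradient on its own data shard, the gradients are
    averaged (standing in for an all-reduce), and every worker applies
    the same update, keeping all model replicas in sync."""
    grads = [grad_fn(w, shard) for shard in shards]  # one per worker
    avg_grad = np.mean(grads, axis=0)                # simulated all-reduce
    return w - lr * avg_grad
```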
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning …
SU Stich - arXiv preprint arXiv:1805.09767, 2018 - arxiv.org
Mini-batch stochastic gradient descent (SGD) is the state of the art in large-scale distributed training. The scheme can reach a linear speedup with respect to the number of workers, but …
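A sketch of the local SGD scheme this snippet contrasts with mini-batch SGD, under our own notational assumptions: each worker takes several independent steps between averaging rounds, so communication happens once per `local_steps` iterations instead of every step.

```python
import numpy as np

def local_sgd(w0, shards, grad_fn, lr=0.05, rounds=10, local_steps=8):
    """Local SGD, sketched: workers run SGD independently on their
    shards for `local_steps` iterations, then their models are
    averaged. Versus mini-batch SGD, this cuts communication by a
    factor of `local_steps`, at the price of some worker drift."""
    w = w0.copy()
    for _ in range(rounds):
        worker_models = []
        for shard in shards:                 # each worker, in parallel
            wk = w.copy()
            for _ in range(local_steps):     # no communication here
                wk -= lr * grad_fn(wk, shard)
            worker_models.append(wk)
        w = np.mean(worker_models, axis=0)   # one averaging round
    return w
```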
D Yin, Y Chen, R Kannan, et al. - International Conference on Machine Learning, 2018 - proceedings.mlr.press
In this paper, we develop distributed optimization algorithms that are provably robust against Byzantine failures (arbitrary and potentially adversarial behavior) in distributed computing …
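The snippet truncates before naming the aggregation rules; coordinate-wise median and trimmed mean are the robust estimators this line of work analyzes, sketched here in our own code rather than the authors' implementation.

```python
import numpy as np

def coordinate_median(grads):
    """Aggregate worker gradients by their coordinate-wise median;
    a minority of arbitrarily corrupted gradients cannot drag the
    result far, unlike a plain mean."""
    return np.median(np.stack(grads), axis=0)

def trimmed_mean(grads, trim=1):
    """Coordinate-wise trimmed mean: in every coordinate, drop the
    `trim` largest and `trim` smallest values, then average the rest
    (requires 2 * trim < number of workers)."""
    g = np.sort(np.stack(grads), axis=0)     # sort each coordinate
    return g[trim:len(grads) - trim].mean(axis=0)
```

Either rule can replace the plain mean in the data-parallel step sketched earlier, which is what makes the resulting SGD variant Byzantine-robust.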