Communication-efficient edge AI: Algorithms and systems

Y Shi, K Yang, T Jiang, J Zhang… - … Surveys & Tutorials, 2020 - ieeexplore.ieee.org
Artificial intelligence (AI) has achieved remarkable breakthroughs in a wide range of fields,
from speech processing and image classification to drug discovery. This is driven by the …

A comprehensive survey on coded distributed computing: Fundamentals, challenges, and networking applications

JS Ng, WYB Lim, NC Luong, Z Xiong… - … Surveys & Tutorials, 2021 - ieeexplore.ieee.org
Distributed computing has become a common approach for large-scale computation tasks
due to benefits such as high reliability, scalability, computation speed, and cost …
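Since the snippet cuts off before the mechanics, here is a minimal numpy sketch of the textbook example behind coded distributed computing: MDS-coded matrix-vector multiplication, where any k of n worker results suffice to recover A @ x despite stragglers. The random Gaussian generator matrix is a stand-in for an MDS code (square submatrices are invertible with probability 1), and all names and sizes are illustrative, not from the survey.

```python
import numpy as np

def coded_matvec(A, x, n=5, k=3, seed=0):
    """Toy straggler-tolerant matrix-vector multiply: any k of n coded
    worker results are enough to recover A @ x."""
    rng = np.random.default_rng(seed)
    blocks = np.split(A, k)                    # assumes k divides A's rows
    G = rng.standard_normal((n, k))            # stand-in for an MDS generator
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

    results = [coded[i] @ x for i in range(n)]             # worker outputs
    alive = sorted(rng.choice(n, size=k, replace=False))   # n - k stragglers lost

    # Decode by solving against the surviving k x k submatrix of G.
    B = np.linalg.solve(G[alive], np.stack([results[i] for i in alive]))
    return B.reshape(-1)

A = np.arange(12.0).reshape(6, 2)
x = np.array([1.0, -1.0])
assert np.allclose(coded_matvec(A, x), A @ x)
```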

SCAFFOLD: Stochastic controlled averaging for federated learning

SP Karimireddy, S Kale, M Mohri… - International …, 2020 - proceedings.mlr.press
Federated learning is a key scenario in modern large-scale machine learning where the
data remains distributed over a large number of clients and the task is to learn a centralized …
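The snippet stops before the method itself, so a minimal numpy sketch of the control-variate idea SCAFFOLD is known for may help: each client corrects its local gradient steps with the difference between its own and the server's control variates, which removes client drift under heterogeneous data. The quadratic toy clients, function names, and hyperparameters are illustrative, not the paper's code; the control-variate update follows the paper's "option II".

```python
import numpy as np

def scaffold_round(x, c, grads, c_locals, lr_l=0.1, lr_g=1.0, K=10):
    """One SCAFFOLD round with full client participation."""
    new_c_locals, deltas = [], []
    for g, c_i in zip(grads, c_locals):
        y = x.copy()
        for _ in range(K):
            y -= lr_l * (g(y) - c_i + c)          # drift-corrected local step
        c_plus = c_i - c + (x - y) / (K * lr_l)   # "option II" control variate
        new_c_locals.append(c_plus)
        deltas.append(y - x)
    x_new = x + lr_g * np.mean(deltas, axis=0)
    c_new = c + np.mean([cp - ci for cp, ci in zip(new_c_locals, c_locals)], axis=0)
    return x_new, c_new, new_c_locals

# Toy heterogeneous clients: f_i(w) = 0.5 * ||w - t_i||^2, so g_i(w) = w - t_i.
targets = [np.array([1.0, 0.0]), np.array([-1.0, 2.0])]
grads = [lambda w, t=t: w - t for t in targets]
x, c = np.zeros(2), np.zeros(2)
c_locals = [np.zeros(2), np.zeros(2)]
for _ in range(50):
    x, c, c_locals = scaffold_round(x, c, grads, c_locals)
# x approaches the global optimum [0., 1.] despite heterogeneous clients
```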

On the convergence of FedAvg on non-IID data

X Li, K Huang, W Yang, S Wang, Z Zhang - arXiv preprint arXiv …, 2019 - arxiv.org
Federated learning enables a large number of edge computing devices to jointly learn a
model without data sharing. As a leading algorithm in this setting, Federated Averaging …
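For context, a minimal sketch of the Federated Averaging (FedAvg) scheme the paper analyzes: each client runs a few local SGD steps from the current global model, and the server takes a data-size-weighted average of the resulting local models. The toy setup and all names are illustrative.

```python
import numpy as np

def fedavg_round(w, client_grads, client_sizes, lr=0.1, local_steps=5):
    """One FedAvg round: local SGD on each client, then weighted averaging."""
    local_models = []
    for g in client_grads:
        w_i = w.copy()
        for _ in range(local_steps):
            w_i -= lr * g(w_i)               # local (stochastic) gradient step
        local_models.append(w_i)
    weights = np.asarray(client_sizes) / sum(client_sizes)
    return sum(p * w_i for p, w_i in zip(weights, local_models))

targets = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
grads = [lambda w, t=t: w - t for t in targets]   # grad of 0.5 * ||w - t_i||^2
w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, grads, client_sizes=[100, 100])
# w converges to [1., 1.], the minimizer of the size-weighted average objective
```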

Federated optimization in heterogeneous networks

T Li, AK Sahu, M Zaheer, M Sanjabi… - … of Machine learning …, 2020 - proceedings.mlsys.org
Federated Learning is a distributed learning paradigm with two key challenges that
differentiate it from traditional distributed optimization: (1) significant variability in terms of the …
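The snippet breaks off before the method; the core device in this paper's FedProx algorithm is a proximal term added to each client's local objective, pulling local iterates back toward the server model. A minimal sketch under that reading, with illustrative names and hyperparameters:

```python
import numpy as np

def fedprox_local(w_global, grad_i, mu=0.1, lr=0.05, steps=20):
    """Approximately minimize f_i(w) + (mu / 2) * ||w - w_global||^2 by
    gradient descent; the proximal term limits client drift."""
    w = w_global.copy()
    for _ in range(steps):
        w -= lr * (grad_i(w) + mu * (w - w_global))
    return w
```

A client would call this in place of a plain local SGD loop; setting mu = 0 recovers FedAvg's local update.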

Asynchronous online federated learning for edge devices with non-IID data

Y Chen, Y Ning, M Slawski… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Federated learning (FL) is a machine learning paradigm where a shared central model is
learned across distributed devices while the training data remains on these devices …
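To make the asynchronous setting concrete, here is a rough sketch of one generic asynchronous aggregation rule: the server mixes in a client model as soon as it arrives, down-weighting it by how stale its starting point was. This illustrates the general idea only; the weighting rule and names here are assumptions, not this paper's specific algorithm.

```python
import numpy as np

def async_server_update(w_server, w_client, t_now, t_dispatch, alpha=0.6):
    """Mix in an arriving client model with a staleness-dependent weight.
    Uses a simple 1 / (1 + delay) decay; purely illustrative."""
    staleness = t_now - t_dispatch
    a = alpha / (1.0 + staleness)
    return (1.0 - a) * w_server + a * w_client
```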

Sparsified SGD with memory

SU Stich, JB Cordonnier… - Advances in neural …, 2018 - proceedings.neurips.cc
Huge-scale machine learning problems are nowadays tackled by distributed optimization
algorithms, i.e., algorithms that leverage the compute power of many devices for training. The …
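The mechanism named in the title is top-k sparsification with an error-feedback memory: coordinates that are not transmitted are accumulated locally and re-added before the next compression step, so nothing is permanently discarded. A minimal numpy sketch of that mechanism (function names are illustrative):

```python
import numpy as np

def topk_with_memory(grad, memory, k):
    """Top-k sparsification with error feedback."""
    acc = memory + grad                  # re-add residual from earlier rounds
    idx = np.argsort(np.abs(acc))[-k:]   # keep the k largest-magnitude coords
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    return sparse, acc - sparse          # transmitted update, new memory
```

Each worker transmits `sparse` and keeps the returned residual as its memory; the server aggregates the sparse updates and applies an ordinary SGD step.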

Adaptive federated learning in resource constrained edge computing systems

S Wang, T Tuor, T Salonidis, KK Leung… - IEEE journal on …, 2019 - ieeexplore.ieee.org
Emerging technologies and applications, including the Internet of Things, social networking, and
crowd-sourcing, generate large amounts of data at the network edge. Machine learning …

Local SGD converges fast and communicates little

SU Stich - arXiv preprint arXiv:1805.09767, 2018 - arxiv.org
Mini-batch stochastic gradient descent (SGD) is the state of the art in large-scale distributed
training. The scheme can reach a linear speedup with respect to the number of workers, but …
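For reference, a minimal sketch of the local SGD scheme the paper analyzes: each worker takes H independent SGD steps between synchronizations, and models are averaged once per round, cutting communication by a factor of H relative to mini-batch SGD. The toy interface is illustrative.

```python
import numpy as np

def local_sgd(w, worker_grads, lr=0.05, rounds=100, H=10):
    """Local SGD: H local steps per worker, then one model average."""
    for _ in range(rounds):
        local_models = []
        for g in worker_grads:
            w_i = w.copy()
            for _ in range(H):
                w_i -= lr * g(w_i)           # independent local steps
            local_models.append(w_i)
        w = np.mean(local_models, axis=0)    # the only communication step
    return w
```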

Byzantine-robust distributed learning: Towards optimal statistical rates

D Yin, Y Chen, R Kannan… - … conference on machine …, 2018 - proceedings.mlr.press
In this paper, we develop distributed optimization algorithms that are provably robust against
Byzantine failures, that is, arbitrary and potentially adversarial behavior, in distributed computing …
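The robust aggregators this paper analyzes are the coordinate-wise median and the coordinate-wise trimmed mean. A minimal numpy sketch of both (function names are illustrative):

```python
import numpy as np

def coordwise_median(grads):
    """Coordinate-wise median of worker gradients: a minority of arbitrary
    (Byzantine) vectors cannot control any coordinate."""
    return np.median(np.stack(grads), axis=0)

def trimmed_mean(grads, b):
    """Coordinate-wise trimmed mean: drop the b largest and b smallest
    values in each coordinate, then average the rest (requires b < n / 2)."""
    G = np.sort(np.stack(grads), axis=0)
    return G[b:len(grads) - b].mean(axis=0)
```

The server would apply either function to the stacked worker gradients in place of a plain average before taking its descent step.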