Distributed artificial intelligence empowered by end-edge-cloud computing: A survey

S Duan, D Wang, J Ren, F Lyu, Y Zhang… - … Surveys & Tutorials, 2022 - ieeexplore.ieee.org
As the computing paradigm shifts from cloud computing to end-edge-cloud computing, artificial intelligence is likewise evolving from a centralized to a distributed form …

A survey on federated learning for resource-constrained IoT devices

A Imteaj, U Thakker, S Wang, J Li… - IEEE Internet of Things …, 2021 - ieeexplore.ieee.org
Federated learning (FL) is a distributed machine learning strategy that generates a global
model by learning from multiple decentralized edge clients. FL enables on-device training …
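
For concreteness, a minimal sketch of the FedAvg-style loop this snippet describes: clients run a few steps of on-device training and a server averages the returned models. The quadratic local losses, client count, and all hyperparameters below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: each client k holds private data (A_k, b_k) and its
# local loss is least squares; d is the model dimension. All assumptions.
d, n_clients = 5, 4
clients = [(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(n_clients)]

def local_sgd(w, A, b, steps=10, lr=0.01):
    """A few local SGD steps on one client's least-squares loss."""
    for _ in range(steps):
        i = rng.integers(len(b))
        g = (A[i] @ w - b[i]) * A[i]          # stochastic gradient
        w = w - lr * g
    return w

w = np.zeros(d)                                # global model
for rnd in range(50):                          # communication rounds
    # Each client trains on-device from the current global model ...
    local_models = [local_sgd(w.copy(), A, b) for A, b in clients]
    # ... and the server averages the returned models (FedAvg).
    w = np.mean(local_models, axis=0)
```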

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
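
The orchestration described here is usually formalized as minimizing a weighted sum of client objectives; a standard way to write it (notation assumed, not taken from this survey):

```latex
% Global federated objective: N clients, client k holds n_k samples with
% local empirical loss F_k; weights are proportional to local data size.
\min_{w \in \mathbb{R}^d} \; F(w) = \sum_{k=1}^{N} \frac{n_k}{n} F_k(w),
\qquad F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} \ell_i(w),
\quad n = \sum_{k=1}^{N} n_k .
```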

From distributed machine learning to federated learning: A survey

J Liu, J Huang, Y Zhou, X Li, S Ji, H Xiong… - … and Information Systems, 2022 - Springer
In recent years, data and computing resources have typically been distributed across end-user devices, regions, and organizations. Because of laws and regulations, the distributed data …

A unified theory of decentralized SGD with changing topology and local updates

A Koloskova, N Loizou, S Boreiri… - International …, 2020 - proceedings.mlr.press
Decentralized stochastic optimization methods have recently gained considerable attention, mainly because of their low per-iteration cost, data locality, and communication efficiency. In …
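
A sketch of the decentralized SGD template analyzed in this line of work: each worker takes a local stochastic gradient step, then gossip-averages with its neighbors through a doubly stochastic mixing matrix, which the theory allows to change between rounds. The ring topology, toy gradients, and step size are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                                   # workers, model dimension

def mixing_matrix(n):
    """Doubly stochastic gossip matrix for a ring; the theory allows it
    to change every round (changing topology)."""
    W = np.eye(n) / 3
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    return W

def stoch_grad(x, i):
    # Placeholder: worker i's stochastic gradient of a toy quadratic.
    return x - i + 0.1 * rng.normal(size=d)

X = np.zeros((n, d))                          # row i = worker i's iterate
for t in range(200):
    G = np.stack([stoch_grad(X[i], i) for i in range(n)])
    X = mixing_matrix(n) @ (X - 0.05 * G)     # local step, then gossip
```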

Decentralized stochastic optimization and gossip algorithms with compressed communication

A Koloskova, S Stich, M Jaggi - International Conference on …, 2019 - proceedings.mlr.press
We consider decentralized stochastic optimization with the objective function (e.g., data
samples for machine learning tasks) being distributed over n machines that can only …
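
The mechanism here is gossip with compressed messages; below is a minimal sketch in the spirit of the CHOCO-Gossip scheme from this line of work, where nodes keep publicly known estimates and exchange only compressed corrections. The top-k compressor, complete-graph mixing matrix, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(v, k=2):
    """Biased top-k sparsifier: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

n, d, gamma = 4, 8, 0.4
x = rng.normal(size=(n, d))                 # local values to average
x_hat = np.zeros((n, d))                    # publicly known estimates
W = np.full((n, n), 1 / n)                  # mixing matrix (complete graph)

for t in range(300):
    # Each node broadcasts only a compressed correction to its estimate.
    for i in range(n):
        x_hat[i] += top_k(x[i] - x_hat[i])
    # Gossip uses the shared estimates; since W is doubly stochastic,
    # the network-wide average of x is preserved exactly.
    x = x + gamma * (W - np.eye(n)) @ x_hat
```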

Sharper convergence guarantees for asynchronous SGD for distributed and federated learning

A Koloskova, SU Stich, M Jaggi - Advances in Neural …, 2022 - proceedings.neurips.cc
We study the asynchronous stochastic gradient descent algorithm for distributed training
over $n$ workers that might be heterogeneous. In this algorithm, workers compute …
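
A toy simulation of this setting: the server applies each gradient as it arrives, even though it was computed on an older ("stale") copy of the model, and heterogeneous workers finish at different times. The delay model and objectives below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_workers, lr = 4, 3, 0.05

def grad(x, worker):
    # Toy heterogeneous objectives: worker w pulls x toward the point w.
    return x - worker + 0.1 * rng.normal(size=d)

x, t = np.zeros(d), 0
# Each worker starts computing on the initial model; the queue records
# (finish_time, worker, model_it_computed_on) to simulate delays.
inflight = [(rng.integers(1, 5), w, x.copy()) for w in range(n_workers)]

for _ in range(200):
    inflight.sort(key=lambda job: job[0])
    _, w, x_stale = inflight.pop(0)          # fastest worker finishes
    x = x - lr * grad(x_stale, w)            # apply its (stale) gradient
    t += 1
    # Worker w immediately restarts from the current model; heterogeneous
    # speeds mean gradients arrive with different delays.
    inflight.append((t + rng.integers(1, 5), w, x.copy()))
```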

SGD: General analysis and improved rates

RM Gower, N Loizou, X Qian… - International …, 2019 - proceedings.mlr.press
We propose a general yet simple theorem describing the convergence of SGD under the
arbitrary sampling paradigm. Our theorem describes the convergence of an infinite array of …
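
"Arbitrary sampling" covers, among many other schemes, importance sampling with reweighting so that the gradient estimate stays unbiased. A hedged sketch: sampling proportionally to squared row norms is just one admissible choice for illustration, not the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 100, 5
A, b = rng.normal(size=(N, d)), rng.normal(size=N)

# Any sampling distribution over data points is allowed by the theory,
# provided the estimate is reweighted to remain unbiased.
p = np.linalg.norm(A, axis=1) ** 2
p /= p.sum()

x = np.zeros(d)
for t in range(1000):
    i = rng.choice(N, p=p)
    g = (A[i] @ x - b[i]) * A[i] / (N * p[i])  # unbiased: E[g] = grad f(x)
    x -= 0.05 * g
```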

On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization

H Yu, R Jin, S Yang - International Conference on Machine …, 2019 - proceedings.mlr.press
Recent developments in large-scale distributed machine learning applications, e.g., deep
neural networks, benefit enormously from advances in distributed non-convex …
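
The algorithmic pattern behind such results is local momentum SGD with periodic model averaging, which cuts communication to once every H steps. Whether the momentum buffers are also averaged varies across papers in this line of work; averaging both, as below, is an assumption, as are all hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr, beta, H = 4, 5, 0.05, 0.9, 10       # H = local steps per round

def stoch_grad(x, i):
    # Toy non-identical objectives per worker (illustrative).
    return x - i + 0.1 * rng.normal(size=d)

X = np.zeros((n, d))                          # per-worker models
V = np.zeros((n, d))                          # per-worker momentum buffers
for rnd in range(30):
    for _ in range(H):                        # H local momentum-SGD steps
        for i in range(n):
            V[i] = beta * V[i] + stoch_grad(X[i], i)
            X[i] -= lr * V[i]
    # Periodic averaging: communicate once every H steps.
    X[:] = X.mean(axis=0)
    V[:] = V.mean(axis=0)
```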

An improved analysis of gradient tracking for decentralized machine learning

A Koloskova, T Lin, SU Stich - Advances in Neural …, 2021 - proceedings.neurips.cc
We consider decentralized machine learning over a network where the training data is
distributed across $n$ agents, each of which can compute stochastic model updates on …
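
A sketch of the gradient-tracking template this paper analyzes: alongside its model, each agent maintains a tracker y_i that gossips toward the network-average gradient, removing the bias that plain decentralized SGD suffers under heterogeneous data. The complete-graph mixing matrix and toy objectives are assumptions.

```python
import numpy as np

n, d, lr = 4, 3, 0.1

def grad(x, i):
    # Heterogeneous local objectives f_i(x) = 0.5 * ||x - i||^2 with
    # distinct targets, so plain gossip SGD would be biased.
    return x - i

W = np.full((n, n), 1 / n)                    # mixing matrix (assumption)
X = np.zeros((n, d))
Y = np.stack([grad(X[i], i) for i in range(n)])  # trackers start at grads

for t in range(100):
    X_new = W @ (X - lr * Y)                  # mix after a tracked step
    # Tracker update: gossip the trackers, then add the gradient change,
    # so each y_i follows the network-wide average gradient.
    Y = W @ Y + np.stack([grad(X_new[i], i) - grad(X[i], i) for i in range(n)])
    X = X_new
```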