Cooperative training methods for distributed machine learning are typically based on the exchange of local gradients or local model parameters. The latter approach is known as …
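To make the two exchange patterns concrete, here is a minimal numpy sketch contrasting gradient exchange with parameter exchange on a toy least-squares problem; the client data, step sizes, and round counts are invented for illustration and are not taken from any of the works excerpted here.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each client holds a private least-squares problem (A_i, b_i).
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]

def local_gradient(w, A, b):
    # Gradient of the local loss 0.5 * ||A w - b||^2.
    return A.T @ (A @ w - b)

lr = 0.01

# (a) Gradient exchange: each round, clients send local gradients and the
#     server applies their average to the shared model.
w = np.zeros(5)
for _ in range(100):
    w -= lr * np.mean([local_gradient(w, A, b) for A, b in clients], axis=0)

# (b) Parameter exchange: clients run several local steps, then the server
#     averages the resulting model parameters.
w2 = np.zeros(5)
for _ in range(20):
    local_models = []
    for A, b in clients:
        w_loc = w2.copy()
        for _ in range(5):  # local steps between communication rounds
            w_loc -= lr * local_gradient(w_loc, A, b)
        local_models.append(w_loc)
    w2 = np.mean(local_models, axis=0)

print("gradient exchange:", w, "\nparameter exchange:", w2)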
Traditional deep learning models are trained on centralized servers using labeled sample data collected from edge devices. This data often includes private information, which the …
In this work, we study the problem of federated learning (FL), where distributed users aim to jointly train a machine learning model with the help of a parameter server (PS). In each …
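As a sketch of the round structure such a PS-based protocol typically follows: the PS broadcasts the current model, a subset of users computes local updates, and the PS aggregates them. The User class, sampling fraction, and toy quadratic losses below are assumptions made for illustration only.

import random

class User:
    # Stand-in for a device with private data; each user's local loss
    # is the quadratic 0.5 * (w - target)**2.
    def __init__(self, target):
        self.target = target

    def local_update(self, w, lr=0.1, steps=3):
        for _ in range(steps):
            w -= lr * (w - self.target)  # local gradient steps
        return w

def fl_round(w, users, frac=0.5, rng=random.Random(0)):
    # PS broadcasts w, samples a subset of users, and averages their updates.
    selected = rng.sample(users, max(1, int(frac * len(users))))
    return sum(u.local_update(w) for u in selected) / len(selected)

users = [User(t) for t in (1.0, 2.0, 3.0, 4.0)]
w = 0.0
for _ in range(50):
    w = fl_round(w, users)
print("global model:", round(w, 3))  # drifts toward the mean target, 2.5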
W Liu, L Chen, W Zhang - IEEE Transactions on Signal and …, 2022 - ieeexplore.ieee.org
Decentralized stochastic gradient descent (SGD) is a driving engine for decentralized federated learning (DFL). The performance of decentralized SGD is jointly influenced by …
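A basic decentralized SGD iteration couples local gradient steps with gossip averaging over a communication graph, with no parameter server. Below is a hedged numpy sketch over an assumed 4-node ring with a doubly stochastic mixing matrix; the topology, step size, and local losses are illustrative assumptions, not details from the paper above.

import numpy as np

n = 4
# Doubly stochastic mixing matrix for a 4-node ring (self + two neighbours).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
targets = np.array([1.0, 2.0, 3.0, 4.0])  # node i minimizes 0.5*(x_i - target_i)^2
x = np.zeros(n)                           # one scalar model per node
lr = 0.1
for _ in range(200):
    grads = x - targets        # local gradients (deterministic stand-in for SGD noise)
    x = W @ (x - lr * grads)   # local step, then gossip-average with neighbours
print(x)  # near-consensus around the global optimum mean(targets) = 2.5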
In federated learning, all networked clients contribute cooperatively to model training. However, as model sizes grow, even sharing the trained partial models often leads to …
H Xing, O Simeone, S Bi - IEEE Journal on Selected Areas in …, 2021 - ieeexplore.ieee.org
The proliferation of Internet-of-Things (IoT) devices and cloud-computing applications over siloed data centers is motivating renewed interest in the collaborative training of a shared …
F Sattler, A Marban, R Rischke… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Communication constraints are one of the major challenges preventing the widespread adoption of Federated Learning systems. Recently, Federated Distillation (FD), a new …
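As a rough illustration of the FD idea of exchanging model outputs rather than model parameters, here is a numpy sketch; the public set, the linear client models, and the student update are all invented for the example and do not reproduce any specific scheme from the works excerpted here.

import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Shared public inputs on which predictions (soft labels) are exchanged.
X_pub = rng.normal(size=(50, 8))

# Each client keeps its model private; here a toy linear classifier.
client_models = [rng.normal(size=(8, 3)) for _ in range(5)]

# 1) Clients upload soft labels on the public set instead of parameters.
soft_labels = [softmax(X_pub @ Wc) for Wc in client_models]

# 2) The server aggregates the predictions (plain averaging here).
consensus = np.mean(soft_labels, axis=0)  # shape (50, 3)

# 3) A student model distills the consensus via softmax cross-entropy.
Ws = np.zeros((8, 3))
for _ in range(300):
    p = softmax(X_pub @ Ws)
    Ws -= 0.5 * X_pub.T @ (p - consensus) / len(X_pub)

# The uplink payload is one soft label per public sample (50 * 3 floats),
# independent of model size; the toy model here is tiny, but for large
# networks this independence is the source of FD's communication savings.
print("uplink floats per client:", consensus.size)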
The dramatic success of deep learning is largely due to the availability of data. Data samples are often acquired on edge devices, such as smartphones, vehicles, and sensors …