Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
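
As context for this survey entry, the following is a minimal sketch of a single federated-averaging round of the kind the orchestration described above performs: the server broadcasts the model, clients compute local updates, and the server averages them. It is an illustrative toy (least-squares clients, one local step), not an algorithm taken from the paper; all names are illustrative.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1):
    """One local gradient step on a least-squares objective (illustrative only)."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def server_round(global_weights, client_datasets):
    """One federated-averaging round: broadcast, local update, average."""
    updates = [client_update(global_weights, d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Toy example: 3 clients, each holding its own small dataset that never leaves the client.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(5)
for _ in range(10):
    w = server_round(w, clients)
```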

Communication compression techniques in distributed deep learning: A survey

Z Wang, M Wen, Y Xu, Y Zhou, JH Wang… - Journal of Systems …, 2023 - Elsevier
Training data and neural network models are growing increasingly large, so training deep
models on a single machine can take unbearably long. To reduce …

Timely communication in federated learning

B Buyukates, S Ulukus - IEEE INFOCOM 2021-IEEE …, 2021 - ieeexplore.ieee.org
We consider a federated learning framework in which a parameter server (PS) trains a
global model by using n clients without actually storing the client data centrally at a cloud …

Collaborative Learning over Wireless Networks: An Introductory Overview

E Ozfatura, D Gündüz, HV Poor - Machine Learning and Wireless …, 2022 - cambridge.org
The number of devices connected to the Internet has already surpassed 1 billion. With the
increasing proliferation of mobile devices, the amount of data collected and transmitted over …

Sparse random networks for communication-efficient federated learning

B Isik, F Pase, D Gunduz, T Weissman… - arXiv preprint arXiv …, 2022 - arxiv.org
One main challenge in federated learning is the large communication cost of exchanging
weight updates from clients to the server at each round. While prior work has made great …

Time-correlated sparsification for communication-efficient federated learning

E Ozfatura, K Ozfatura, D Gündüz - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
Federated learning (FL) enables multiple clients to collaboratively train a shared model, with
the help of a parameter server (PS), without disclosing their local datasets. However, due to …
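
As background for the sparsification entries above and below, here is a minimal sketch of plain top-k gradient sparsification, the building block that time-correlated schemes refine. It is not the paper's algorithm, and all names are illustrative.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient; drop the rest.
    Returns (indices, values), the pair a client would actually transmit."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def desparsify(idx, vals, dim):
    """Server-side reconstruction of the sparse update as a dense vector."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

g = np.random.default_rng(1).normal(size=1000)
idx, vals = topk_sparsify(g, k=10)       # only ~1% of the entries are sent
g_hat = desparsify(idx, vals, g.size)
```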

EF-BV: A unified theory of error feedback and variance reduction mechanisms for biased and unbiased compression in distributed optimization

L Condat, K Yi, P Richtárik - Advances in Neural …, 2022 - proceedings.neurips.cc
In distributed or federated optimization and learning, communication between the different
computing units is often the bottleneck and gradient compression is widely used to reduce …
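
The entry above concerns error feedback around compression operators. As a point of reference, here is a minimal sketch of the classic error-feedback loop wrapped around a biased top-k compressor, i.e., a generic mechanism of the kind such unified analyses cover, not the EF-BV algorithm itself; all names are illustrative.

```python
import numpy as np

def topk(v, k):
    """Biased top-k compressor: keep only the k largest-magnitude coordinates."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, error, k):
    """Classic error feedback: compress the gradient plus the carried-over error,
    then remember what was lost so it can be re-sent in later rounds."""
    corrected = grad + error
    compressed = topk(corrected, k)
    new_error = corrected - compressed
    return compressed, new_error

rng = np.random.default_rng(2)
error = np.zeros(100)
for _ in range(5):
    g = rng.normal(size=100)
    msg, error = ef_step(g, error, k=5)   # msg is what actually gets communicated
```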

Exact Optimality of Communication-Privacy-Utility Tradeoffs in Distributed Mean Estimation

B Isik, WN Chen, A Ozgur… - Advances in Neural …, 2024 - proceedings.neurips.cc
We study the mean estimation problem under communication and local differential privacy
constraints. While previous work has proposed order-optimal algorithms for the same …
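
Since the snippet only states the problem, here is a minimal sketch of the communication side of distributed mean estimation: unbiased one-bit stochastic quantization at each client, averaging at the server. It deliberately omits the local differential privacy mechanism the paper analyzes, and all names are illustrative.

```python
import numpy as np

def stochastic_quantize(x, lo, hi):
    """Unbiased one-bit stochastic quantization of each coordinate to {lo, hi},
    assuming every coordinate of x lies in [lo, hi]."""
    p = (x - lo) / (hi - lo)                  # probability of rounding up
    bits = np.random.random(x.shape) < p
    return np.where(bits, hi, lo)

# Each client sends one bit per coordinate; averaging the quantized vectors
# gives an unbiased estimate of the true mean across clients.
rng = np.random.default_rng(3)
clients = [rng.uniform(-1, 1, size=8) for _ in range(50)]
lo, hi = -1.0, 1.0
est = np.mean([stochastic_quantize(x, lo, hi) for x in clients], axis=0)
true_mean = np.mean(clients, axis=0)
```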

DeepReduce: A sparse-tensor communication framework for federated deep learning

H Xu, K Kostopoulou, A Dutta, X Li… - Advances in …, 2021 - proceedings.neurips.cc
Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the
deep neural network's gradients, or as a result of an explicit sparsification process. Existing …
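
To illustrate why index-value encoding of such sparse tensors saves bandwidth, here is a minimal sketch (not the paper's framework) of encoding a mostly-zero gradient as separate index and value arrays; the names and the threshold are illustrative.

```python
import numpy as np

def encode_sparse(tensor, threshold=1e-3):
    """Encode a mostly-zero tensor as (indices, values) arrays."""
    idx = np.flatnonzero(np.abs(tensor) > threshold)
    return idx.astype(np.int32), tensor[idx].astype(np.float32)

def decode_sparse(idx, vals, shape):
    """Rebuild the dense tensor on the receiving side."""
    out = np.zeros(shape, dtype=np.float32)
    out.flat[idx] = vals
    return out

g = np.zeros(10_000, dtype=np.float32)
g[np.random.default_rng(4).choice(10_000, size=100, replace=False)] = 1.0
idx, vals = encode_sparse(g)
g_hat = decode_sparse(idx, vals, g.shape)
dense_bytes  = g.nbytes                   # 40,000 bytes for the dense gradient
sparse_bytes = idx.nbytes + vals.nbytes   # 800 bytes at 1% density
```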

DRAGONN: Distributed randomized approximate gradients of neural networks

Z Wang, Z Xu, X Wu, A Shrivastava… - … on Machine Learning, 2022 - proceedings.mlr.press
Data-parallel distributed training (DDT) has become the de facto standard for accelerating
the training of most deep learning tasks on massively parallel hardware. In the DDT …