Z Wang, M Wen, Y Xu, Y Zhou, JH Wang… - Journal of Systems …, 2023 - Elsevier
Training datasets and neural network models are growing increasingly large, and training deep learning models on a single machine is becoming unbearably slow. To reduce …
B Buyukates, S Ulukus - IEEE INFOCOM 2021-IEEE …, 2021 - ieeexplore.ieee.org
We consider a federated learning framework in which a parameter server (PS) trains a global model by using n clients without actually storing the client data centrally at a cloud …
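As a rough illustration of the parameter-server setup described in this entry, the sketch below shows one aggregation round in which the server forms a weighted average of client model updates (a FedAvg-style rule; the function and variable names are illustrative and not taken from the paper).

```python
import numpy as np

def server_aggregate(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg-style rule).

    client_weights: list of 1-D numpy arrays, one flattened model per client
    client_sizes:   number of local examples held by each client
    """
    total = float(sum(client_sizes))
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Toy example: three clients with different amounts of local data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(server_aggregate(clients, sizes))  # clients are weighted by dataset size
```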
The number of devices connected to the Internet has already surpassed 1 billion. With the increasing proliferation of mobile devices, the amount of data collected and transmitted over …
One main challenge in federated learning is the large communication cost of exchanging weight updates from clients to the server at each round. While prior work has made great …
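One common way to cut the per-round uplink cost mentioned here is to send only the largest-magnitude entries of each update. The sketch below implements a simple top-k sparsifier and the matching server-side reconstruction; it is generic illustrative code, not the compression scheme proposed in this particular paper.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a flattened update.

    Returns (indices, values); all other entries are treated as zero,
    so only k index/value pairs need to be transmitted.
    """
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def densify(indices, values, size):
    """Rebuild the dense update on the server side."""
    out = np.zeros(size)
    out[indices] = values
    return out

g = np.random.randn(1000)
idx, vals = top_k_sparsify(g, k=10)        # transmit only ~1% of the entries
recovered = densify(idx, vals, g.size)
```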
Federated learning (FL) enables multiple clients to collaboratively train a shared model, with the help of a parameter server (PS), without disclosing their local datasets. However, due to …
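To make the "without disclosing their local datasets" point concrete, a minimal sketch of a client's local step is shown below: the client trains on its own data and reports only a model delta to the PS. The least-squares loss and all names here are assumptions chosen for illustration.

```python
import numpy as np

def client_update(global_w, X, y, lr=0.1, local_steps=5):
    """Run a few local gradient steps on private data (X, y) and
    return only the model delta; the raw data never leaves the client."""
    w = global_w.copy()
    for _ in range(local_steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w - global_w   # only this delta is sent to the parameter server

X = np.random.randn(32, 4)
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + 0.1 * np.random.randn(32)
delta = client_update(np.zeros(4), X, y)
```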
In distributed or federated optimization and learning, communication between the different computing units is often the bottleneck and gradient compression is widely used to reduce …
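Gradient compression in this sense typically means quantizing or sparsifying the update before transmission. Below is a small sketch of an unbiased one-level stochastic quantizer in the spirit of QSGD, sending one norm plus roughly one bit per coordinate; it is a generic scheme for illustration, not the specific compressor analyzed in the paper.

```python
import numpy as np

def quantize(v, rng):
    """Unbiased stochastic quantization to sign-and-norm form
    (QSGD-style with a single quantization level)."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return norm, np.zeros_like(v)
    prob = np.abs(v) / norm                    # each value lies in [0, 1]
    bits = rng.random(v.shape) < prob          # keep coordinate i with prob |v_i|/||v||
    return norm, np.sign(v) * bits             # one float plus ~1 bit per coordinate

def dequantize(norm, signs):
    return norm * signs                        # expectation recovers v (unbiased)

rng = np.random.default_rng(0)
g = np.random.randn(8)
norm, signs = quantize(g, rng)
print(dequantize(norm, signs))
```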
We study the mean estimation problem under communication and local differential privacy constraints. While previous work has proposed order-optimal algorithms for the same …
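For a concrete, though simplistic, instance of this problem, the sketch below estimates the mean of bounded scalars under local differential privacy by having each user clip their value and add Laplace noise. This is a textbook baseline, not the order-optimal algorithm the paper studies, and all names are illustrative.

```python
import numpy as np

def ldp_report(x, eps, rng):
    """Each user clips x to [0, 1] and adds Laplace(1/eps) noise,
    yielding an eps-locally-differentially-private report."""
    x = float(np.clip(x, 0.0, 1.0))
    return x + rng.laplace(scale=1.0 / eps)

def estimate_mean(values, eps, seed=0):
    """The server averages the noisy reports; the estimate is unbiased,
    with variance 2 / (n * eps**2) added by the mechanism."""
    rng = np.random.default_rng(seed)
    reports = [ldp_report(v, eps, rng) for v in values]
    return float(np.mean(reports))

data = np.random.default_rng(1).uniform(size=10_000)
print(estimate_mean(data, eps=1.0))   # close to the true mean of ~0.5
```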
Sparse tensors appear frequently in federated deep learning, either as a direct artifact of the deep neural network's gradients, or as a result of an explicit sparsification process. Existing …
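To illustrate what a sparse tensor looks like in this communication setting, the sketch below packs a mostly-zero gradient into coordinate (index, value) form and accumulates several such tensors on the receiving side, roughly as a server might when aggregating sparsified client gradients. The format and names are my own, not the paper's.

```python
import numpy as np

def to_coo(tensor):
    """Encode a mostly-zero 1-D tensor as (indices, values) pairs."""
    idx = np.flatnonzero(tensor)
    return idx.astype(np.int32), tensor[idx]

def accumulate_coo(sparse_updates, size):
    """Sum several sparse updates into one dense buffer."""
    out = np.zeros(size)
    for idx, vals in sparse_updates:
        np.add.at(out, idx, vals)   # handles repeated indices correctly
    return out

g1 = np.zeros(100); g1[[3, 40]] = [0.5, -1.0]
g2 = np.zeros(100); g2[[3, 70]] = [0.25, 2.0]
summed = accumulate_coo([to_coo(g1), to_coo(g2)], size=100)
```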
Z Wang, Z Xu, X Wu, A Shrivastava… - … on Machine Learning, 2022 - proceedings.mlr.press
Data-parallel distributed training (DDT) has become the de facto standard for accelerating the training of most deep learning tasks on massively parallel hardware. In the DDT …
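As a minimal picture of the DDT pattern described here, the sketch below splits a batch across simulated workers of equal size, has each compute a local gradient, and averages the results, which for equal-sized shards matches the full-batch gradient. This is a toy NumPy simulation; real systems perform the averaging with all-reduce collectives (e.g. NCCL) rather than a Python loop.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one worker's shard of the batch."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, n_workers=4, lr=0.01):
    """Split the batch, compute per-worker gradients, average them
    (the all-reduce step), and apply one SGD update."""
    shards_X = np.array_split(X, n_workers)
    shards_y = np.array_split(y, n_workers)
    grads = [local_gradient(w, Xs, ys) for Xs, ys in zip(shards_X, shards_y)]
    return w - lr * np.mean(grads, axis=0)

X = np.random.randn(256, 8)
y = X @ np.random.randn(8)
w = data_parallel_step(np.zeros(8), X, y)
```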