Distributed learning in wireless networks: Recent progress and future challenges

M Chen, D Gündüz, K Huang, W Saad… - IEEE Journal on …, 2021 - ieeexplore.ieee.org
The next generation of wireless networks will enable many machine learning (ML) tools and
applications to efficiently analyze various types of data collected by edge devices for …

Federated learning in cloud-edge collaborative architecture: key technologies, applications and challenges

G Bao, P Guo - Journal of Cloud Computing, 2022 - Springer
In recent years, with the rapid growth of edge data, the novel cloud-edge collaborative
architecture has been proposed to compensate for the lack of data processing power of …

Communication-efficient device scheduling for federated learning using stochastic optimization

J Perazzone, S Wang, M Ji… - IEEE INFOCOM 2022 …, 2022 - ieeexplore.ieee.org
Federated learning (FL) is a useful tool in distributed machine learning that utilizes users'
local datasets in a privacy-preserving manner. When deploying FL in a constrained wireless …

What do we mean by generalization in federated learning?

H Yuan, W Morningstar, L Ning, K Singhal - arXiv preprint arXiv …, 2021 - arxiv.org
Federated learning data is drawn from a distribution of distributions: clients are drawn from a
meta-distribution, and their data are drawn from local data distributions. Thus generalization …

Rethinking gradient sparsification as total error minimization

A Sahu, A Dutta, AM Abdelmoniem… - Advances in …, 2021 - proceedings.neurips.cc
Gradient compression is a widely established remedy to tackle the communication
bottleneck in distributed training of large deep neural networks (DNNs). Under the error …

Federated learning with spiking neural networks

Y Venkatesha, Y Kim, L Tassiulas… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
As neural networks get widespread adoption in resource-constrained embedded devices,
there is a growing need for low-power neural systems. Spiking Neural Networks (SNNs) are …

RandProx: Primal-dual optimization algorithms with randomized proximal updates

L Condat, P Richtárik - arXiv preprint arXiv:2207.12891, 2022 - arxiv.org
Proximal splitting algorithms are well suited to solving large-scale nonsmooth optimization
problems, in particular those arising in machine learning. We propose a new primal-dual …

Limitations and future aspects of communication costs in federated learning: A survey

M Asad, S Shaukat, D Hu, Z Wang, E Javanmardi… - Sensors, 2023 - mdpi.com
This paper explores the potential for communication-efficient federated learning (FL) in
modern distributed systems. FL is an emerging distributed machine learning technique that …

Layer-wise adaptive model aggregation for scalable federated learning

S Lee, T Zhang, AS Avestimehr - … of the AAAI Conference on Artificial …, 2023 - ojs.aaai.org
In Federated Learning (FL), a common approach for aggregating local solutions
across clients is periodic full model averaging. It is, however, known that different layers of …

Sparsified secure aggregation for privacy-preserving federated learning

I Ergun, HU Sami, B Guler - arXiv preprint arXiv:2112.12872, 2021 - arxiv.org
Secure aggregation is a popular protocol in privacy-preserving federated learning, which
allows model aggregation without revealing the individual models in the clear. On the other …