Wireless federated learning with hybrid local and centralized training: A latency minimization design

N. Huang, M. Dai, Y. Wu, T. Q. S. Quek, X. Shen
IEEE Journal of Selected Topics in Signal Processing, 2022. ieeexplore.ieee.org
Wireless federated learning (FL) is a collaborative machine learning (ML) framework in which wireless client-devices independently train their ML models and send the locally trained models to the FL server for aggregation. In this paper, we consider the coexistence of privacy-sensitive client-devices and privacy-insensitive yet computing-resource-constrained client-devices, and propose an FL framework with hybrid centralized and local training. Specifically, the privacy-sensitive client-devices perform local ML model training and send their local models to the FL server. Each privacy-insensitive client-device has two options: (i) conducting local training and then sending its local model to the FL server, or (ii) directly sending its local data to the FL server for centralized training. After collecting the data from the privacy-insensitive client-devices that choose to upload their local data, the FL server conducts centralized training on the received datasets. The global model is then generated by aggregating (i) the local models uploaded by the client-devices and (ii) the model trained centrally by the FL server. Focusing on this hybrid FL framework, we first analyze its convergence with respect to the client-devices' selections between local and centralized training. We then formulate a joint optimization of the client-devices' selections between local and centralized training, the FL training configuration (i.e., the numbers of local and global iterations), and the bandwidth allocations to the client-devices, with the objective of minimizing the overall latency for reaching FL convergence. Despite the non-convexity of this joint optimization problem, we identify its layered structure and propose an efficient algorithm to solve it. Numerical results demonstrate the advantage of our proposed hybrid local and centralized training framework and of our proposed algorithm over several benchmark FL schemes and algorithms.
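The following is a minimal sketch of one global round of the hybrid scheme described above, assuming linear models, plain gradient steps, and data-size-weighted aggregation; the paper's actual update rules, weighting, and system model are not given in this abstract, so all names and constants here are illustrative.

```python
# Sketch of one hybrid FL round: local training for privacy-sensitive devices,
# optional raw-data upload for privacy-insensitive ones, centralized training
# at the server, then aggregation. Assumptions are flagged in comments.
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, local_iters=5, lr=0.1):
    """Run `local_iters` gradient steps of least-squares on one dataset."""
    for _ in range(local_iters):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Synthetic per-device datasets (placeholders for the devices' private data).
d, n = 4, 32
devices = [(rng.normal(size=(n, d)), rng.normal(size=n)) for _ in range(6)]

# Assumed split: devices 0-2 are privacy-sensitive (must train locally);
# devices 3-5 are privacy-insensitive, and `uploads_data[i]` is the
# hypothetical selection decision (True = send raw data to the server).
privacy_sensitive = [0, 1, 2]
uploads_data = {3: True, 4: False, 5: True}

w_global = np.zeros(d)

# (i) Local training on privacy-sensitive and opting-out devices.
local_models, local_sizes = [], []
for i, (X, y) in enumerate(devices):
    if i in privacy_sensitive or not uploads_data.get(i, False):
        local_models.append(local_sgd(w_global.copy(), X, y))
        local_sizes.append(len(y))

# (ii) Centralized training on the pooled uploaded datasets.
Xc = np.vstack([devices[i][0] for i, up in uploads_data.items() if up])
yc = np.concatenate([devices[i][1] for i, up in uploads_data.items() if up])
central_model = local_sgd(w_global.copy(), Xc, yc)

# Aggregate the local models and the centrally trained model, weighted by
# data size (an assumption; the abstract does not specify the weights).
models = local_models + [central_model]
sizes = np.array(local_sizes + [len(yc)], dtype=float)
w_global = sum(s * m for s, m in zip(sizes / sizes.sum(), models))
print(w_global)
```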
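To make the selection layer of the joint optimization concrete, here is an illustrative brute-force baseline over the binary upload decisions of the privacy-insensitive devices, under a toy per-round latency model (compute time plus upload time with equal bandwidth sharing). The paper instead exploits the problem's layered structure and also optimizes the bandwidth allocation and iteration counts; this sketch and all its constants are assumptions, not the authors' algorithm.

```python
# Brute-force baseline: enumerate upload selections and pick the one that
# minimizes a toy per-round latency. Illustrates why slow devices may prefer
# uploading raw data over local training.
from itertools import product

# Toy per-device parameters: CPU speed (cycles/s), model size and raw-data
# size to upload (bits). Devices 0-2 privacy-sensitive, 3-5 insensitive.
cpu = [2e9, 1e9, 1.5e9, 0.3e9, 0.4e9, 0.5e9]     # slow CPUs on devices 3-5
model_bits, data_bits = 1e6, 8e6
cycles_per_local_round, rate_total = 5e8, 1e8    # total uplink rate (bit/s)
local_iters = 5

def round_latency(uploads):
    """Latency of one global round for a given tuple of upload decisions."""
    # Equal bandwidth split across all devices (assumption; the paper
    # optimizes this allocation jointly with the selections).
    rate = rate_total / len(cpu)
    times = []
    for i in range(len(cpu)):
        if i >= 3 and uploads[i - 3]:
            times.append(data_bits / rate)               # raw-data upload only
        else:
            compute = local_iters * cycles_per_local_round / cpu[i]
            times.append(compute + model_bits / rate)    # train, then upload
    return max(times)  # the round ends when the slowest device finishes

# Enumerate the 2^3 selection vectors for the privacy-insensitive devices.
best = min(product([False, True], repeat=3), key=round_latency)
print(best, round_latency(best))
```

With these toy numbers, the search selects data upload for the computation-constrained devices, mirroring the tradeoff the framework is built around.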