FedKL: Tackling data heterogeneity in federated reinforcement learning by penalizing KL divergence

Z Xie, S Song - IEEE Journal on Selected Areas in Communications, 2023 - ieeexplore.ieee.org
One of the fundamental issues for Federated Learning (FL) is data heterogeneity, which causes accuracy degradation, slow convergence, and communication bottlenecks. Although the impact of data heterogeneity on supervised FL has been widely studied, the related investigation for Federated Reinforcement Learning (FRL) is still in its infancy. In this paper, we first define the type and level of data heterogeneity for FRL systems. By inspecting the connection between the global and local objective functions, we prove that local training can benefit the global objective if the local update is properly penalized by the total variation (TV) distance between the local and global policies. A necessary condition for the global policy to be learnable from the local environments is also derived, which is directly related to the heterogeneity level. Based on this theoretical result, a Kullback-Leibler (KL) divergence based penalty is proposed to directly constrain the model outputs in the distribution space, and a convergence proof of the proposed algorithm is provided. By jointly penalizing the divergence of the local policy from the global policy with a global penalty and penalizing each iteration of the local training with a local penalty, the proposed method achieves a better trade-off between training speed (step size) and convergence. Experimental results on two popular Reinforcement Learning (RL) platforms demonstrate the advantage of the proposed algorithm over existing methods in accelerating and stabilizing the training process with heterogeneous data.
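To make the penalized local update concrete, the following is a minimal sketch (not the authors' FedKL implementation) of local policy-gradient training with two KL terms: a global penalty that keeps the local policy close to the aggregated global policy received at the start of the round, and a local penalty that keeps each local iteration close to the previous local iterate. The toy categorical policy, the coefficient names beta_global and beta_local, and the KL direction KL(local || reference) are illustrative assumptions, not details taken from the paper.

```python
# Minimal, hypothetical sketch of KL-penalized local policy training (not the
# authors' FedKL code). A standard policy-gradient surrogate is regularized by
# (i) a global penalty toward the aggregated policy and (ii) a local penalty
# toward the previous local iterate. beta_global / beta_local are assumed names.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Tiny categorical policy used only for illustration."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, n_actions))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return F.log_softmax(self.net(obs), dim=-1)  # log pi(a|s)


def kl_categorical(log_p: torch.Tensor, log_q: torch.Tensor) -> torch.Tensor:
    """Mean KL(p || q) over a batch, for categorical log-probabilities."""
    return (log_p.exp() * (log_p - log_q)).sum(dim=-1).mean()


def local_update(local: PolicyNet, global_policy: PolicyNet,
                 obs: torch.Tensor, actions: torch.Tensor, advantages: torch.Tensor,
                 beta_global: float = 1.0, beta_local: float = 0.1,
                 iterations: int = 5, steps_per_iter: int = 4, lr: float = 3e-4):
    """One communication round of penalized local training on a batch of data."""
    opt = torch.optim.Adam(local.parameters(), lr=lr)
    with torch.no_grad():
        log_pi_glob = global_policy(obs)   # fixed reference for the global penalty
    for _ in range(iterations):
        with torch.no_grad():
            log_pi_prev = local(obs)       # reference for this iteration's local penalty
        for _ in range(steps_per_iter):
            log_pi = local(obs)
            chosen = log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
            pg_loss = -(chosen * advantages).mean()   # policy-gradient surrogate
            loss = (pg_loss
                    + beta_global * kl_categorical(log_pi, log_pi_glob)
                    + beta_local * kl_categorical(log_pi, log_pi_prev))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return local
```

In this sketch, the relative size of the two coefficients plays the role of the trade-off described in the abstract: the local penalty restrains each iteration (allowing a larger effective step size), while the global penalty bounds how far a client's policy can drift from the aggregated policy under heterogeneous environments.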