Multi-agent deep reinforcement learning based spectrum allocation for D2D underlay communications

Z Li, C Guo - IEEE Transactions on Vehicular Technology, 2019 - ieeexplore.ieee.org
Device-to-device (D2D) communication underlaying cellular networks is a promising technique for improving spectrum efficiency. In such networks, D2D transmissions may cause severe interference to both the cellular link and other D2D links, which poses a significant technical challenge for spectrum allocation. Existing centralized schemes require global information, which incurs a large signaling overhead, while existing distributed schemes require frequent information exchange among D2D users and cannot achieve global optimality. In this paper, a distributed spectrum allocation framework based on multi-agent deep reinforcement learning, named multi-agent actor-critic (MAAC), is proposed. MAAC shares global historical states, actions, and policies during centralized training, requires no signaling interaction during execution, and exploits cooperation among users to further optimize system performance. Moreover, to reduce the computational complexity of training, we further propose the neighbor-agent actor-critic (NAAC), which uses only neighboring users' historical information for centralized training. Simulation results show that the proposed MAAC and NAAC effectively reduce the outage probability of cellular links, substantially improve the sum rate of D2D links, and converge quickly.
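The core idea the abstract describes, centralized training with a global reward signal but fully decentralized execution, can be illustrated with a toy sketch. The following is not the paper's algorithm (which uses deep actor and critic networks over channel states); it is a minimal tabular stand-in in which each D2D agent's actor picks a channel independently, while training updates use a shared global reward and a per-agent critic baseline. All names, sizes, and the collision-based reward are hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_CHANNELS = 3, 3  # hypothetical problem size, not from the paper

# Decentralized actors: one softmax policy (logit vector over channels) per agent.
logits = np.zeros((N_AGENTS, N_CHANNELS))
# "Centralized" critic stand-in: a running baseline per agent, trained on the
# GLOBAL reward that an individual agent could not observe at execution time.
baseline = np.zeros(N_AGENTS)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def global_reward(choices):
    # Toy stand-in for the sum D2D rate: a channel used by exactly one D2D
    # pair yields unit rate; colliding pairs interfere and contribute nothing.
    counts = np.bincount(choices, minlength=N_CHANNELS)
    return float(np.sum(counts == 1))

ACTOR_LR, CRITIC_LR = 0.1, 0.05
for step in range(2000):
    probs = np.array([softmax(l) for l in logits])
    # Execution is decentralized: each agent samples from its own policy only.
    choices = np.array([rng.choice(N_CHANNELS, p=p) for p in probs])
    r = global_reward(choices)  # available only during (centralized) training
    for i in range(N_AGENTS):
        adv = r - baseline[i]          # advantage w.r.t. the critic's baseline
        baseline[i] += CRITIC_LR * adv # critic update
        grad = -probs[i]
        grad[choices[i]] += 1.0        # grad of log pi_i(a_i) for a softmax policy
        logits[i] += ACTOR_LR * adv * grad  # policy-gradient actor update
```

NAAC's refinement would correspond to computing each agent's training signal from its neighbors' histories rather than from all agents, shrinking the critic's input; that distinction does not arise in this tiny shared-reward toy.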