Deep reinforcement learning-based multidimensional resource management for energy harvesting cognitive NOMA communications

Z Shi, X Xie, H Lu, H Yang, J Cai, Z Ding
IEEE Transactions on Communications, 2021 - ieeexplore.ieee.org
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving the energy efficiency and spectral efficiency of the upcoming beyond-fifth-generation (B5G) network, especially for supporting wireless sensor communications in Internet of Things (IoT) systems. However, how to realize intelligent allocation of frequency, time, and energy resources to achieve better performance remains an open problem. In this paper, we study joint spectrum, energy, and time resource management for EH-CR-NOMA IoT systems. Our goal is to minimize the number of data packet losses of all secondary sensing users (SSUs) while satisfying the constraints on the maximum charging battery capacity, maximum transmit power, maximum buffer capacity, and minimum data rates of the primary users (PUs) and SSUs. Owing to the non-convexity of this optimization problem and the stochastic nature of the wireless environment, we propose a distributed multidimensional resource management algorithm based on deep reinforcement learning (DRL). Considering the continuity of the resources to be managed, the deep deterministic policy gradient (DDPG) algorithm is adopted, based on which each agent (SSU) can manage its own multidimensional resources without collaboration. In addition, a simplified but practical action adjuster (AA) is introduced to improve training efficiency and protect the battery. The provided results show that the convergence speed of the proposed algorithm is about 4 times faster than that of plain DDPG, and its average number of packet losses (ANPL) is about 8 times lower than that of the greedy algorithm.
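To make the DDPG-plus-action-adjuster idea in the abstract more concrete, the following Python/PyTorch sketch shows a deterministic actor producing normalized continuous actions and a simple adjuster that clips the transmit-power component to the battery and hardware limits before the action is executed. This is a minimal illustration, not the authors' implementation: the state and action dimensions, the network sizes, and the adjuster rule (capping power by the stored energy) are assumptions made for the example.

import torch
import torch.nn as nn

STATE_DIM = 6    # e.g. battery level, buffer occupancy, channel gains (assumed)
ACTION_DIM = 3   # e.g. transmit power, EH time share, sub-channel share (assumed)

class Actor(nn.Module):
    """Deterministic DDPG policy mu(s) -> a in [0, 1]^ACTION_DIM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),   # normalized actions
        )

    def forward(self, state):
        return self.net(state)

def action_adjuster(action, battery_level, p_max):
    """Illustrative action adjuster (assumed rule): cap the transmit power
    (first action component) by both the hardware limit and the energy
    currently stored in the battery, so infeasible actions never reach the
    environment."""
    adjusted = action.clone()
    power = adjusted[0] * p_max                        # de-normalize power
    power = torch.clamp(power, max=min(battery_level, p_max))
    adjusted[0] = power / p_max                        # back to [0, 1]
    return adjusted

if __name__ == "__main__":
    actor = Actor()
    state = torch.rand(STATE_DIM)                      # dummy observation
    raw_action = actor(state).detach()
    safe_action = action_adjuster(raw_action, battery_level=0.3, p_max=1.0)
    print("raw:", raw_action.tolist())
    print("adjusted:", safe_action.tolist())

In a full agent, the adjusted action would be applied to the EH-CR-NOMA environment and the resulting transition stored in a replay buffer for standard DDPG actor-critic updates; projecting actions into the feasible set in this way is one plausible reading of how an adjuster can both protect the battery and speed up training.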