Multi-objective optimization for UAV-assisted wireless powered IoT networks based on extended DDPG algorithm

Y. Yu, J. Tang, J. Huang, X. Zhang, … - IEEE Transactions on Communications, 2021 - ieeexplore.ieee.org
This paper studies an unmanned aerial vehicle (UAV)-assisted wireless powered IoT network, in which a rotary-wing UAV adopts a fly-hover-communicate protocol to successively visit the IoT devices with pending demands. During the hovering periods, the UAV operates in full-duplex mode to simultaneously collect data from the target device and charge the other devices within its coverage. A practical propulsion power consumption model and a non-linear energy harvesting model are taken into account. We formulate a multi-objective optimization problem that jointly optimizes three objectives over a given mission period: maximizing the sum data rate, maximizing the total harvested energy, and minimizing the UAV's energy consumption. These three objectives partly conflict with one another, and weight parameters are introduced to describe their relative importance. Because the IoT devices continuously gather information from their physical surroundings and their data-upload requirements change dynamically, online path planning for the UAV is required. In this paper, we apply deep reinforcement learning to make these decisions online. An extended deep deterministic policy gradient (DDPG) algorithm is proposed to learn UAV control policies over the multiple objectives. During training, the agent learns to produce optimal policies under the given weight settings while collecting data in time according to requirement priority and avoiding data overflow at the devices. Verification results show that the proposed MODDPG (multi-objective DDPG) algorithm achieves joint optimization of the three objectives, and that the resulting policies can be adjusted via the weight parameters assigned to the optimization objectives.
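Since the abstract states that the three objectives are combined through weight parameters describing their relative importance, the reward driving the DDPG agent is presumably a weighted scalarization of sum data rate, harvested energy, and UAV energy consumption. The snippet below is a minimal sketch of such a scalarized reward, not the authors' implementation; the weight names, normalization constants, and the StepOutcome fields are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): weighted scalarization of the three
# objectives from the abstract, as it might feed a DDPG-style per-step reward.
# All names and normalizers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StepOutcome:
    data_rate: float         # sum data rate collected this step (bits/s)
    harvested_energy: float  # total energy harvested by IoT devices this step (J)
    uav_energy: float        # UAV propulsion + communication energy this step (J)

def scalarized_reward(o: StepOutcome,
                      w_rate: float = 0.5,
                      w_energy: float = 0.3,
                      w_uav: float = 0.2,
                      rate_norm: float = 1e6,
                      energy_norm: float = 1.0,
                      uav_norm: float = 100.0) -> float:
    """Combine the partly conflicting objectives into one scalar reward:
    reward higher data rate and harvested energy, penalize UAV energy use."""
    return (w_rate * o.data_rate / rate_norm
            + w_energy * o.harvested_energy / energy_norm
            - w_uav * o.uav_energy / uav_norm)

# Example: changing (w_rate, w_energy, w_uav) shifts the learned policy's
# emphasis among the objectives, matching the weight-adjustable behavior
# the abstract attributes to MODDPG.
print(scalarized_reward(StepOutcome(data_rate=2.4e6, harvested_energy=0.8, uav_energy=150.0)))
```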