Deep reinforcement learning with optimized reward functions for robotic trajectory planning

J Xie, Z Shao, Y Li, Y Guan, J Tan - IEEE Access, 2019 - ieeexplore.ieee.org
This paper aims to improve the efficiency of deep reinforcement learning (DRL)-based methods for robotic trajectory planning in unstructured working environments with obstacles. In contrast to the traditional sparse reward function, it presents two new dense reward functions. First, an azimuth reward function is proposed to accelerate the learning process locally and yield a more reasonable trajectory by modeling position and orientation constraints, which dramatically reduces the blindness of exploration. To further improve efficiency, a subtask-level reward function is proposed to provide global guidance to the DRL agent. The subtask-level reward function is designed under the assumption that the task can be divided into several subtasks, which greatly reduces invalid exploration. Extensive experiments show that the proposed reward functions improve the convergence rate by up to three times with state-of-the-art DRL methods, increase the convergence mean by 2.25%–13.22%, and decrease the standard deviation by 10.8%–74.5%.
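A minimal sketch of the two ideas the abstract describes: a dense, azimuth-style shaping term built from position and orientation error to the goal, plus a one-time bonus for completing an assumed subtask (waypoint). All function names, weights, and thresholds here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def azimuth_reward(pos, goal, heading, w_dist=1.0, w_ang=0.5):
    """Dense shaping term from position and orientation constraints (assumed form)."""
    to_goal = goal - pos
    dist = np.linalg.norm(to_goal)
    desired_heading = np.arctan2(to_goal[1], to_goal[0])
    # Wrap the heading error to [-pi, pi] and take its magnitude.
    ang_err = np.abs(np.arctan2(np.sin(desired_heading - heading),
                                np.cos(desired_heading - heading)))
    # Closer to the goal and better aligned -> larger (less negative) reward.
    return -(w_dist * dist + w_ang * ang_err)

def subtask_reward(pos, waypoints, reached, bonus=10.0, tol=0.05):
    """Global guidance: pay a bonus the first time each assumed waypoint is reached."""
    r = 0.0
    for i, wp in enumerate(waypoints):
        if not reached[i] and np.linalg.norm(wp - pos) < tol:
            reached[i] = True
            r += bonus
    return r

# Toy usage: one step of reward computation for a planar end-effector.
pos = np.array([0.2, 0.1])
goal = np.array([1.0, 1.0])
waypoints = [np.array([0.5, 0.5]), goal]   # hypothetical subtask decomposition
reached = [False, False]
r = azimuth_reward(pos, goal, heading=0.3) + subtask_reward(pos, waypoints, reached)
print(f"total dense reward: {r:.3f}")
```

In this reading, the azimuth term gives a gradient at every step (unlike a sparse goal reward), while the subtask bonuses inject global structure so the agent is not rewarded for wandering far from the intended decomposition of the task.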