Reward criteria impact on the performance of reinforcement learning agent for autonomous navigation

A Dayal, LR Cenkeramaddi, A Jha - Applied Soft Computing, 2022 - Elsevier
Abstract
In reinforcement learning, an agent takes an action at every time step (following a policy) in an environment to maximize the expected cumulative reward. The shaping of the reward function therefore plays a crucial role in an agent's learning, and designing an optimal reward function is not a trivial task. In this article, we propose a reward criterion from which we develop different reward functions. The chosen criterion is based on the percentage of positive and negative rewards received by an agent, and it gives rise to three classes: 'Balanced Class,' 'Skewed Positive Class,' and 'Skewed Negative Class.' We train a Deep Q-Network agent on a point-goal navigation task using the different reward classes and compare their performance with a benchmark class. In our experiments, the skewed negative class outperforms the benchmark class by achieving much lower variance; on the other hand, the benchmark class converges faster than the skewed negative class.
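The three reward classes described above can be illustrated with a minimal sketch. The specific reward magnitudes below are hypothetical, chosen only to show how the split between positive and negative reward can be balanced or skewed; the paper's actual values and reward shaping are not reproduced here.

```python
# A hedged sketch of per-step reward functions for a point-goal navigation
# task. The magnitudes (+1.0, -2.0, etc.) are illustrative assumptions,
# not the values used in the paper.

def make_reward_fn(goal_reward, penalty):
    """Build a step-reward function with a given positive/negative split."""
    def reward(reached_goal, collided):
        if reached_goal:
            return goal_reward   # positive reward on reaching the goal
        if collided:
            return penalty       # negative reward on collision
        return 0.0               # neutral intermediate step (assumption)
    return reward

# Illustrative instances of the three classes:
balanced        = make_reward_fn(goal_reward=+1.0, penalty=-1.0)  # ~50/50 split
skewed_positive = make_reward_fn(goal_reward=+2.0, penalty=-0.5)  # mostly positive
skewed_negative = make_reward_fn(goal_reward=+0.5, penalty=-2.0)  # mostly negative
```

In a DQN training loop, the chosen function would simply replace the environment's default step reward, leaving the rest of the agent unchanged.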