Softmax exploration strategies for multiobjective reinforcement learning

P Vamplew, R Dazeley, C Foale - Neurocomputing, 2017 - Elsevier
Abstract
Despite growing interest over recent years in applying reinforcement learning to multiobjective problems, there has been little research into the applicability and effectiveness of exploration strategies within the multiobjective context. This work considers several widely used approaches to exploration from the single-objective reinforcement learning literature and examines their incorporation into multiobjective Q-learning. In particular, this paper proposes two novel approaches which extend the softmax operator to work with vector-valued rewards. The performance of these exploration strategies is evaluated across a set of benchmark environments. Issues arising from the multiobjective formulation of these benchmarks which impact the performance of the exploration strategies are identified. It is shown that, of the techniques considered, the combination of the novel softmax–epsilon exploration with optimistic initialisation provides the most effective trade-off between exploration and exploitation.
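To make the softmax–epsilon idea from the abstract concrete, the sketch below combines Boltzmann (softmax) action selection with an epsilon-greedy fallback over vector-valued Q-estimates. Since the abstract does not specify how the authors' operators reduce reward vectors to action preferences, this sketch assumes a simple linear scalarisation; the function name `softmax_epsilon`, the `weights` vector, and the optimistic initial Q-value are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def softmax_epsilon(q_values, weights, tau=1.0, epsilon=0.1, rng=None):
    """Hypothetical softmax-epsilon action selection for multiobjective Q-learning.

    q_values : array of shape (n_actions, n_objectives), vector-valued Q estimates.
    weights  : array of shape (n_objectives,), linear scalarisation weights
               (one simple way to reduce Q-vectors to scalars; the paper's
               own softmax extensions may differ).
    tau      : softmax temperature.
    epsilon  : probability of choosing a uniformly random action.
    """
    rng = rng or np.random.default_rng()
    # Reduce each action's Q-vector to a scalar preference (assumed scalarisation).
    scalar_q = q_values @ weights
    if rng.random() < epsilon:
        # Epsilon branch: explore uniformly at random.
        return rng.integers(len(scalar_q))
    # Softmax (Boltzmann) branch: sample proportionally to exp(Q / tau),
    # subtracting the max for numerical stability.
    prefs = (scalar_q - scalar_q.max()) / tau
    probs = np.exp(prefs)
    probs /= probs.sum()
    return rng.choice(len(scalar_q), p=probs)

# Example: 4 actions, 2 objectives, optimistically initialised Q-values,
# which the abstract pairs with softmax-epsilon to encourage early exploration.
q = np.full((4, 2), 10.0)
action = softmax_epsilon(q, weights=np.array([0.5, 0.5]))
```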