Toward computationally efficient inverse reinforcement learning via reward shaping

LH Cooke, H Klyne, E Zhang, C Laidlaw… - arXiv preprint arXiv:2312.09983, 2023 - arxiv.org
Inverse reinforcement learning (IRL) is computationally challenging, with common
approaches requiring the solution of multiple reinforcement learning (RL) sub-problems.
This work motivates the use of potential-based reward shaping to reduce the computational
burden of each RL sub-problem. This work serves as a proof of concept that we hope will
inspire future developments toward computationally efficient IRL.
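
The abstract does not spell out the paper's construction, but for context, potential-based reward shaping (Ng et al., 1999) replaces the reward r(s, a, s') with r(s, a, s') + γΦ(s') − Φ(s), which leaves the optimal policy unchanged while a well-chosen potential Φ can make each RL sub-problem easier to solve. A minimal tabular sketch of this standard shaping operation (not the paper's implementation; the potential function is an assumed input) might look like:

```python
# Minimal sketch of potential-based reward shaping on a tabular MDP.
# The shaped reward r'(s, a) = r(s, a) + gamma * E[phi(s')] - phi(s)
# preserves the optimal policy (Ng et al., 1999); phi is an assumed,
# user-supplied potential function (e.g. a rough value estimate).
import numpy as np

def shape_rewards(R, P, phi, gamma):
    """R: (S, A) rewards; P: (S, A, S) transition probabilities;
    phi: (S,) potential over states; gamma: discount factor.
    Returns (S, A) expected shaped rewards."""
    expected_next_potential = P @ phi          # E[phi(s') | s, a], shape (S, A)
    return R + gamma * expected_next_potential - phi[:, None]

# Toy usage on a random 4-state, 2-action MDP.
rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9
R = rng.normal(size=(S, A))
P = rng.dirichlet(np.ones(S), size=(S, A))     # valid transition kernel
phi = rng.normal(size=S)                       # any potential preserves optimality
print(shape_rewards(R, P, phi, gamma).shape)   # (4, 2)
```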