Bootstrapping with models: Confidence intervals for off-policy evaluation

J Hanna, P Stone, S Niekum - Proceedings of the AAAI Conference on Artificial Intelligence, 2017 - ojs.aaai.org
Abstract
In many reinforcement learning applications, it is desirable to determine confidence interval lower bounds on the performance of any given policy without executing said policy. In this context, we propose two bootstrapping off-policy evaluation methods that use learned MDP transition models to estimate lower confidence bounds on policy performance with limited data. We empirically evaluate the proposed methods on standard policy evaluation tasks.
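As a rough illustration of the general idea described in the abstract (not the paper's exact algorithm), a model-based bootstrap lower bound can be sketched as follows: resample the observed trajectories with replacement, fit a transition model to each resample, evaluate the target policy in each learned model, and take a low percentile of the resulting estimates. In this Python sketch, `fit_model` and `evaluate_policy` are hypothetical placeholders for a transition-model learner and a model-based policy evaluator.

```python
import numpy as np

def bootstrap_lower_bound(trajectories, fit_model, evaluate_policy,
                          n_bootstrap=200, confidence=0.95, rng=None):
    """Sketch of a model-based bootstrap lower confidence bound.

    trajectories:     list of observed trajectories (the limited dataset)
    fit_model:        callable fitting an MDP transition model to data
                      (hypothetical; supplied by the user)
    evaluate_policy:  callable estimating the target policy's expected
                      return by simulation in a learned model (hypothetical)
    """
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for _ in range(n_bootstrap):
        # Resample trajectories with replacement (the bootstrap step).
        idx = rng.integers(0, len(trajectories), size=len(trajectories))
        resample = [trajectories[i] for i in idx]
        # Fit a transition model to the resampled data, then evaluate
        # the target policy by rolling it out in that learned model.
        model = fit_model(resample)
        estimates.append(evaluate_policy(model))
    # The empirical (1 - confidence) percentile of the bootstrap
    # distribution of returns serves as the lower confidence bound.
    return np.percentile(estimates, 100.0 * (1.0 - confidence))
```

The percentile step is the standard percentile-bootstrap construction; the paper's methods differ in how the model is learned and used, but the resample-fit-evaluate loop captures the overall structure.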