Active model estimation in Markov decision processes

J Tarbouriech, S Shekhar, M Pirotta… - Conference on Uncertainty in Artificial Intelligence, 2020 - proceedings.mlr.press
Abstract
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP). Efficient exploration in this problem requires the agent to identify the regions in which estimating the model is more difficult and then exploit this knowledge to collect more samples there. In this paper, we formalize this problem, introduce the first algorithm to learn an $\epsilon$-accurate estimate of the dynamics, and provide its sample complexity analysis. While this algorithm enjoys strong guarantees in the large-sample regime, it tends to perform poorly in the early stages of exploration. To address this issue, we propose an algorithm based on maximum weighted entropy, a heuristic that stems from common sense and our theoretical analysis. The main idea is to cover the entire state-action space with weights proportional to the noise in their transition functions. Using a number of simple domains with heterogeneous noise in their transitions, we show that our heuristic-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small-sample regime, while achieving asymptotic performance similar to that of the original algorithm.
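To make the "maximum weighted entropy" idea in the abstract concrete, here is a minimal, hypothetical sketch: each state-action pair gets a weight proportional to an estimate of the noise of its transition function, and a visitation distribution is scored by its weighted entropy. The noise proxy, the function names, and the notation (`P_hat`, `visitation`, `weights`) are illustrative assumptions for exposition, not the paper's exact algorithm or notation.

```python
import numpy as np

def transition_noise(p_hat):
    """Simple noise proxy for one estimated next-state distribution p_hat:
    1 - sum_s p_hat(s)^2, which is 0 for deterministic rows and largest
    for uniform rows. This is an assumed stand-in for the paper's measure."""
    return 1.0 - np.sum(p_hat ** 2)

def weighted_entropy(visitation, weights, eps=1e-12):
    """Weighted entropy of a state-action visitation distribution.
    visitation, weights: arrays of shape (S, A); visitation sums to 1."""
    lam = np.clip(visitation, eps, 1.0)
    return float(np.sum(weights * (-lam * np.log(lam))))

# Toy example: 3 states, 2 actions; one near-deterministic transition row,
# the rest uniform (noisy), so noisy pairs receive larger weights.
S, A = 3, 2
P_hat = np.full((S, A, S), 1.0 / S)       # uniform (noisy) rows
P_hat[0, 0] = np.array([1.0, 0.0, 0.0])   # deterministic row -> weight 0
weights = np.array([[transition_noise(P_hat[s, a]) for a in range(A)]
                    for s in range(S)])

uniform_visit = np.full((S, A), 1.0 / (S * A))
print(weighted_entropy(uniform_visit, weights))
```

Under this sketch, an exploration policy that maximizes the weighted entropy of its visitation distribution is pushed toward state-action pairs whose transitions are noisier and hence harder to estimate, which matches the intuition stated in the abstract.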