On the complexity of solving Markov decision problems

ML Littman, TL Dean, LP Kaelbling - arXiv preprint arXiv:1302.4971, 2013 - arxiv.org
Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the structure of MDPs.
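
The abstract refers to the running time of MDP solution algorithms; below is a minimal sketch of value iteration, one standard such algorithm, to make the object of study concrete. The transition array `P`, reward array `R`, discount `gamma`, and the toy 2-state example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-6):
    """Apply the Bellman optimality backup until the value function converges.

    P[s, a, s'] -- transition probabilities, shape (n_states, n_actions, n_states)
    R[s, a]     -- expected immediate reward, shape (n_states, n_actions)
    """
    n_states, n_actions, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_{s'} P[s, a, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)  # optimal values and a greedy policy
        V = V_new

# Made-up 2-state, 2-action MDP purely for demonstration.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, policy = value_iteration(P, R)
print(V, policy)
```

Each iteration costs O(|S|^2 |A|) for the tabular backup, and the number of iterations needed scales with the discount factor and the desired accuracy, which is exactly the kind of running-time question the paper surveys.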