An empirical dynamic programming algorithm for continuous MDPs

WB Haskell, R Jain, H Sharma, P Yu - arXiv preprint arXiv:1709.07506, 2017 - arxiv.org
We propose universal randomized function approximation-based empirical value iteration (EVI) algorithms for Markov decision processes. The 'empirical' nature comes from each iteration being performed empirically, using samples obtained by simulating the next state. This makes the Bellman operator a random operator. A parametric and a non-parametric method for function approximation, using a parametric function space and the Reproducing Kernel Hilbert Space (RKHS) respectively, are then combined with EVI. Both function spaces have the universal function approximation property. Basis functions are picked randomly. Convergence analysis is done using a random operator framework with techniques from the theory of stochastic dominance. Finite time sample complexity bounds are derived for both universal approximate dynamic programming algorithms. Numerical experiments support the versatility and effectiveness of this approach.
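The following is a minimal sketch (not the authors' reference implementation) of the idea described above: each Bellman backup is estimated empirically from simulated next-state samples, and the value function is refit in a randomly drawn parametric basis (here, random Fourier features stand in for the randomized function approximation). The toy MDP, reward, simulator, sample sizes, and feature count are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9                            # discount factor
actions = np.array([-0.1, 0.0, 0.1])   # hypothetical finite action set

def reward(x, a):
    # Hypothetical reward: stay near the origin, pay for control effort.
    return -(x ** 2) - 0.1 * (a ** 2)

def simulate_next_states(x, a, n_samples):
    # Hypothetical simulator of the next-state distribution (noisy drift).
    return np.clip(x + a + 0.05 * rng.standard_normal(n_samples), -1.0, 1.0)

# Randomly drawn basis: random Fourier features phi(x) = cos(w x + b).
n_features = 50
w = rng.normal(scale=5.0, size=n_features)
b = rng.uniform(0, 2 * np.pi, size=n_features)

def features(x):
    x = np.atleast_1d(x)
    return np.cos(np.outer(x, w) + b)   # shape (len(x), n_features)

def value(theta, x):
    return features(x) @ theta

def empirical_bellman(theta, x, n_samples=20):
    # Empirical backup: max_a [ r(x,a) + gamma * (1/n) sum_j v(y_j) ],
    # with y_j simulated next states -- this makes the operator random.
    backups = []
    for a in actions:
        y = simulate_next_states(x, a, n_samples)
        backups.append(reward(x, a) + gamma * value(theta, y).mean())
    return max(backups)

# Empirical value iteration: back up on sampled states, refit by least squares.
theta = np.zeros(n_features)
states = rng.uniform(-1.0, 1.0, size=200)
for k in range(50):
    targets = np.array([empirical_bellman(theta, x) for x in states])
    Phi = features(states)
    theta, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

print("approximate value at x=0:", value(theta, 0.0)[0])
```

The paper's actual algorithms, sample-complexity guarantees, and the RKHS (non-parametric) variant differ in detail; this sketch only illustrates the interaction between the empirical (sampled) Bellman operator and a randomly chosen basis.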