Finite-Sample Analysis of Proximal Gradient TD Algorithms

B. Liu, J. Liu, M. Ghavamzadeh, S. Mahadevan, et al. - arXiv preprint arXiv:2006.14364, 2020 - arxiv.org
In this paper, we analyze the convergence rate of the gradient temporal difference learning (GTD) family of algorithms. Previous analyses of this class of algorithms use ODE techniques to prove asymptotic convergence, and to the best of our knowledge, no finite-sample analysis has been done. Moreover, there has not been much work on finite-sample analysis for convergent off-policy reinforcement learning algorithms. In this work, we formulate GTD methods as stochastic gradient algorithms w.r.t. a primal-dual saddle-point objective function, and then conduct a saddle-point error analysis to obtain finite-sample bounds on their performance. Two revised algorithms are also proposed, namely projected GTD2 and GTD2-MP, which offer improved convergence guarantees and acceleration, respectively. The results of our theoretical analysis show that the GTD family of algorithms is indeed comparable to the existing LSTD methods in off-policy learning scenarios.
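For context, a brief sketch of the saddle-point reformulation mentioned in the abstract, written in standard MSPBE notation (the symbols below are assumed and may differ from the paper's exact ones): with $A = \mathbb{E}[\rho\,\phi(\phi - \gamma\phi')^\top]$, $b = \mathbb{E}[\rho\, r\,\phi]$, and $C = \mathbb{E}[\phi\phi^\top]$, the objective $\mathrm{MSPBE}(\theta) = \tfrac{1}{2}\|A\theta - b\|^2_{C^{-1}}$ admits the convex-concave saddle-point form

\[
\min_\theta \max_y \; L(\theta, y) = \langle A\theta - b,\, y\rangle - \tfrac{1}{2}\, y^\top C\, y .
\]

Under this view, GTD2 can be read as stochastic gradient descent in $\theta$ coupled with stochastic gradient ascent in $y$ on $L(\theta, y)$; this primal-dual structure is what the saddle-point error analysis exploits, while GTD2-MP replaces the plain gradient steps with a mirror-prox (extragradient) update to obtain acceleration.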