Authors
Huaqing Xiong, Lin Zhao, Yingbin Liang, Wei Zhang
Publication date
2020/12
Conference paper
Advances in Neural Information Processing Systems (NeurIPS), 2020 (spotlight)
Description
Although Q-learning is one of the most successful algorithms for finding the best action-value function (and thus the optimal policy) in reinforcement learning, its implementation often suffers from large overestimation of Q-function values incurred by random sampling. The double Q-learning algorithm proposed in van Hasselt (2010) overcomes such an overestimation issue by randomly switching the update between two Q-estimators, and has thus gained significant popularity in practice. However, the theoretical understanding of double Q-learning is rather limited. So far only the asymptotic convergence has been established, which does not characterize how fast the algorithm converges. In this paper, we provide the first non-asymptotic (i.e., finite-time) analysis for double Q-learning. We show that both synchronous and asynchronous double Q-learning are guaranteed to converge to an ε-accurate neighborhood of the global optimum by taking Ω̃((1/((1−γ)^6 ε^2))^{1/ω} + (1/(1−γ))^{1/(1−ω)}) iterations, where ω ∈ (0,1) is the decay parameter of the learning rate and γ is the discount factor. Our analysis develops novel techniques to derive finite-time bounds on the difference between two inter-connected stochastic processes, which is new to the literature of stochastic approximation.
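For context on the update rule the abstract refers to ("randomly switching the update between two Q-estimators"), here is a minimal sketch of one tabular double Q-learning step in the style of van Hasselt (2010); the function name, array shapes, and argument names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def double_q_step(QA, QB, s, a, r, s_next, alpha, gamma, rng):
    """One tabular double Q-learning update.

    QA, QB : np.ndarray of shape (num_states, num_actions)
    With probability 1/2 update QA, otherwise QB. The greedy action is
    chosen by the table being updated but evaluated with the other table,
    which mitigates the overestimation bias of standard Q-learning.
    """
    if rng.random() < 0.5:
        a_star = int(np.argmax(QA[s_next]))          # greedy action under QA
        target = r + gamma * QB[s_next, a_star]      # evaluated with QB
        QA[s, a] += alpha * (target - QA[s, a])
    else:
        b_star = int(np.argmax(QB[s_next]))          # greedy action under QB
        target = r + gamma * QA[s_next, b_star]      # evaluated with QA
        QB[s, a] += alpha * (target - QB[s, a])
```

In the paper's setting, the step size alpha decays polynomially with the iteration count (the decay exponent is the parameter ω appearing in the bound above).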
Scholar articles
H Xiong, L Zhao, Y Liang, W Zhang - Advances in neural information processing systems, 2020