Authors
Qingyuan Wu, Simon Sinong Zhan, Yixuan Wang, Yuhui Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang
Publication date
2024/2/5
Conference
Forty-first International Conference on Machine Learning
Description
Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degeneration in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms the SOTAs in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.
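The core idea in the abstract — learn a value function under a short (here, zero) delay, then bootstrap a long-delay value from it rather than learning over the full delay-augmented state space — can be illustrated with a toy sketch. This is a hypothetical simplification for intuition, not the authors' implementation: the chain MDP, hyperparameters, and the `delayed_value` helper are all invented for this example, and the forward rollout is only exact because the toy dynamics are deterministic.

```python
import random

random.seed(0)

N_STATES = 5          # deterministic chain: 0 -> 1 -> ... -> 4 (goal)
ACTIONS = (0, 1)      # 0 = stay, 1 = move right
GAMMA = 0.9
ALPHA = 0.5

def step(s, a):
    """Deterministic chain dynamics; reward 1 on entering the goal state."""
    s2 = min(s + a, N_STATES - 1)
    r = 1.0 if (s2 == N_STATES - 1 and s != s2) else 0.0
    return s2, r

# Auxiliary task: ordinary delay-free tabular Q-learning.
q_aux = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    target = r + GAMMA * max(q_aux[s2])
    q_aux[s][a] += ALPHA * (target - q_aux[s][a])

def delayed_value(last_obs_state, action_buffer):
    """Value of a delayed observation plus a buffer of pending actions:
    roll the last observed state forward through the buffer, then
    bootstrap from the auxiliary (short-delay) Q-function instead of
    relearning values over the augmented (state + buffer) space."""
    s, disc, ret = last_obs_state, 1.0, 0.0
    for a in action_buffer:          # replay the pending actions
        s, r = step(s, a)
        ret += disc * r
        disc *= GAMMA
    return ret + disc * max(q_aux[s])

# Observed state 0 with two pending "move right" actions: the agent is
# really in state 2, so the value is gamma^2 * max_a Q(2, a).
v = delayed_value(0, [1, 1])
```

The point of the sketch: the auxiliary table has only `N_STATES * len(ACTIONS)` entries, while a naive augmented formulation for delay `d` would grow with the number of possible action buffers; in stochastic environments the deterministic forward rollout above would no longer be exact, which is where the paper's bootstrapping and policy-improvement corrections come in.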
Scholar articles
Q Wu, SS Zhan, Y Wang, CW Lin, C Lv, Q Zhu… - arXiv preprint arXiv:2402.03141, 2024
Q Wu, SS Zhan, Y Wang, Y Wang, CW Lin, C Lv, Q Zhu… - Forty-first International Conference on Machine …