Authors
Ghada AZN Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone
Publication date
2021/7/8
Workshop paper
Sparsity in Neural Networks: Advancing Understanding and Practice 2021
Description
The dynamic sparse training (DST) literature demonstrates that a highly sparse neural network can match the performance of its corresponding dense network in supervised and unsupervised learning when trained from scratch, while substantially reducing computational and memory costs. In this paper, we show for the first time that deep reinforcement learning can also benefit from dynamic sparse training. We demonstrate that DST can be leveraged to decrease the long training time required by deep reinforcement learning agents without sacrificing performance. To achieve this, we propose a DST algorithm that adapts to the online nature and instability of the deep reinforcement learning paradigm. We integrate our proposed algorithm with state-of-the-art deep reinforcement learning methods. Experimental results demonstrate that our dynamic sparse compact agents can effectively learn and achieve higher performance than the original dense methods while reducing the parameter count and floating-point operations (FLOPs) by 50%. More impressively, our dynamic sparse agents learn faster: they reach the final performance achieved by dense agents after only 40-50% of the training steps required by the latter. We evaluate our approach on OpenAI gym continuous control tasks.
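For readers unfamiliar with dynamic sparse training, the sketch below illustrates the generic drop-and-grow topology update that DST methods build on: periodically prune the weakest active connections by magnitude and regrow the same number at inactive positions, keeping the overall sparsity fixed. This is a minimal illustrative sketch of that general idea, not the specific algorithm proposed in the paper; the function and parameter names are assumptions chosen for illustration.

```python
# Minimal sketch of a drop-and-grow update in the spirit of dynamic sparse
# training (e.g., SET-style updates). Not the paper's exact procedure.
import numpy as np


def drop_and_grow(weights, mask, drop_fraction=0.1, rng=None):
    """Drop the weakest active connections and regrow as many at random.

    weights: dense weight matrix (np.ndarray)
    mask:    binary array of the same shape; 1 marks an active connection
    """
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_drop = int(drop_fraction * active.size)
    if n_drop == 0:
        return weights, mask

    # Drop: deactivate the n_drop active connections with smallest |weight|.
    magnitudes = np.abs(weights.flat[active])
    drop_idx = active[np.argsort(magnitudes)[:n_drop]]
    mask.flat[drop_idx] = 0
    weights.flat[drop_idx] = 0.0

    # Grow: activate n_drop currently inactive positions at random, with
    # zero-initialized weights (a common choice). In this simplified sketch
    # a just-dropped position may occasionally be re-selected.
    inactive = np.flatnonzero(mask == 0)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[grow_idx] = 1
    return weights, mask
```

In DST methods, an update of this kind is typically applied every fixed number of training steps while the total number of active connections (and hence the FLOP and memory budget) stays constant; the paper's contribution is adapting this general scheme to the online, unstable training regime of deep reinforcement learning agents.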
Total citations
Scholar articles
GAZN Sokar, E Mocanu, DC Mocanu, M Pechenizkiy… - Sparsity in Neural Networks: Advancing Understanding …, 2021