catalyst.RL: a distributed framework for reproducible RL research

S Kolesnikov, O Hrinchuk - arXiv preprint arXiv:1903.00027, 2019 - arxiv.org
Despite the recent progress in the field of deep reinforcement learning (RL), and arguably because of it, a large body of work remains to be done on reproducing and carefully comparing different RL algorithms. We present catalyst.RL, an open-source framework for RL research with a focus on reproducibility and flexibility. The main features of our library include large-scale asynchronous distributed training, easy-to-use configuration files containing the complete list of hyperparameters for each experiment, and efficient implementations of various RL algorithms along with auxiliary tricks such as frame stacking, n-step returns, and value distributions. To demonstrate the usefulness of our framework, we evaluate it on a range of continuous-control benchmarks, as well as on the task of developing a controller that enables a physiologically based human model with a prosthetic leg to walk and run. The latter task was introduced at the NeurIPS 2018 AI for Prosthetics Challenge, where our team took 3rd place, capitalizing on the ability of catalyst.RL to train high-quality and sample-efficient RL agents.
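One of the auxiliary tricks named in the abstract, n-step returns, can be sketched as follows. This is a minimal illustrative implementation, not catalyst.RL's actual API: the function name `n_step_returns` and its signature are assumptions made for this example. It computes, for each timestep t of a finite trajectory, the target G_t = sum_{k=0}^{h-1} gamma^k r_{t+k} + gamma^h V(s_{t+h}), where h = min(n, steps remaining) and V is bootstrapped from provided value estimates.

```python
def n_step_returns(rewards, values, gamma=0.99, n=3):
    """Compute n-step return targets for every timestep of a trajectory.

    rewards: list of T rewards r_t (illustrative; real frameworks batch this).
    values:  list of T + 1 value estimates V(s_t); values[-1] bootstraps
             past the end of the collected trajectory.
    """
    T = len(rewards)
    returns = []
    for t in range(T):
        # Truncate the n-step window at the end of the trajectory.
        horizon = min(n, T - t)
        # Discounted sum of up-to-n rewards ...
        g = sum(gamma ** k * rewards[t + k] for k in range(horizon))
        # ... plus the bootstrapped value of the state n steps ahead.
        g += gamma ** horizon * values[t + horizon]
        returns.append(g)
    return returns
```

Compared with one-step TD targets, larger n propagates reward information faster at the cost of higher variance; exposing n as a hyperparameter in a configuration file is consistent with the reproducibility focus the abstract describes.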