RLlib: Abstractions for distributed reinforcement learning

E Liang, R Liaw, R Nishihara, P Moritz… - International Conference on Machine Learning, 2018 - proceedings.mlr.press
Abstract
Reinforcement learning (RL) algorithms involve the deep nesting of highly irregular computation patterns, each of which typically exhibits opportunities for distributed computation. We argue for distributing RL components in a composable way by adapting algorithms for top-down hierarchical control, thereby encapsulating parallelism and resource requirements within short-running compute tasks. We demonstrate the benefits of this principle through RLlib: a library that provides scalable software primitives for RL. These primitives enable a broad range of algorithms to be implemented with high performance, scalability, and substantial code reuse. RLlib is available as part of the open source Ray project at http://rllib.io/.
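The abstract describes RLlib's top-down control model, in which a single driver script coordinates distributed rollout collection and policy optimization on top of Ray. As a rough illustration of what this looks like from a user's perspective, here is a minimal usage sketch (not taken from the paper); it assumes a recent Ray 2.x release, and the specific names used (PPOConfig, environment, build, train) reflect the current RLlib API rather than the 2018 interface described in the paper.

```python
# Minimal, illustrative sketch of driving RLlib from a single script.
# Assumes a recent Ray 2.x release; RLlib's configuration API has changed
# across versions, so class and method names may differ in older releases.
import ray
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()  # start the Ray runtime that hosts RLlib's distributed workers

# Configure PPO on a standard control environment.
config = PPOConfig().environment("CartPole-v1")
algo = config.build()

# Each train() call is one top-down iteration: the driver dispatches
# short-running rollout tasks, gathers samples, and updates the policy.
for _ in range(3):
    result = algo.train()
    # Metric keys vary across Ray versions; inspect `result` to see them.
    print(result.get("episode_reward_mean"))

algo.stop()
ray.shutdown()
```

The point of the sketch is the control structure the abstract argues for: parallelism and resource requirements stay encapsulated inside the tasks launched by train(), while the user-facing program remains a simple sequential loop.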