GPU-Accelerated Robotic Simulation for Distributed Reinforcement Learning

J Liang, V Makoviychuk, A Handa… - Conference on Robot Learning, 2018 - proceedings.mlr.press
Abstract
Most Deep Reinforcement Learning (Deep RL) algorithms require a prohibitively large number of training samples for learning complex tasks. Many recent works on speeding up Deep RL have focused on distributed training and simulation. While distributed training is often done on the GPU, simulation is not. In this work, we propose using GPU-accelerated RL simulations as an alternative to CPU ones. Using NVIDIA Flex, a GPU-based physics engine, we show promising speed-ups of learning various continuous-control, locomotion tasks. With one GPU and CPU core, we are able to train the Humanoid running task in less than 20 minutes, using 10-1000x fewer CPU cores than previous works. We also demonstrate the scalability of our simulator to multi-GPU settings to train more challenging locomotion tasks.
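The abstract's core idea is running thousands of simulation environments in lockstep so the physics engine (NVIDIA Flex, in the paper) advances them all in one batched device call instead of one CPU process per environment. A minimal toy sketch of that batched-stepping pattern, using NumPy point-mass dynamics as a hypothetical stand-in for a GPU physics engine (the `step_batch` function and its dynamics are illustrative, not the paper's implementation):

```python
import numpy as np

def step_batch(states, actions, dt=0.01):
    """Advance N simple point-mass environments in one vectorized call.

    states:  (N, 2) array of [position, velocity] per environment
    actions: (N,) array of forces (unit mass assumed)

    A GPU engine applies the same lockstep pattern, but over device
    memory and full rigid-body dynamics rather than host arrays.
    """
    pos, vel = states[:, 0], states[:, 1]
    vel = vel + actions * dt   # integrate force into velocity
    pos = pos + vel * dt       # integrate velocity into position
    return np.stack([pos, vel], axis=1)

# Step 4096 environments simultaneously with a single call.
N = 4096
states = np.zeros((N, 2))
actions = np.ones(N)
states = step_batch(states, actions)
print(states.shape)  # (4096, 2)
```

The key property is that simulation cost grows with the batched tensor operation, not with a per-environment process count, which is what lets one GPU replace the hundreds of CPU cores that prior distributed-simulation setups required.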