DriveIRL: Drive in Real Life with Inverse Reinforcement Learning

T. Phan-Minh, F. Howington, T. S. Chu, M. S. Tomov, R. E. Beaudoin, S. U. Lee, N. Li, C. Dicle, et al.
2023 IEEE International Conference on Robotics and Automation (ICRA), 2023 - ieeexplore.ieee.org
In this paper, we introduce the first published planner to drive a car in dense, urban traffic using Inverse Reinforcement Learning (IRL). Our planner, DriveIRL, generates a diverse set of trajectory proposals and scores them with a learned model. The best trajectory is tracked by our self-driving vehicle's low-level controller. We train our trajectory scoring model on a 500+ hour real-world dataset of expert driving demonstrations in Las Vegas within the maximum entropy IRL framework. DriveIRL's benefits include: a simple design due to only learning the trajectory scoring function, a flexible and relatively interpretable feature engineering approach, and strong real-world performance. We validated DriveIRL on the Las Vegas Strip and demonstrated fully autonomous driving in heavy traffic, including scenarios involving cut-ins, abrupt braking by the lead vehicle, and hotel pickup/dropoff zones. Our dataset, a part of nuPlan, has been released to the public to help further research in this area.
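The abstract describes the planner's core loop: generate a set of trajectory proposals, score each with a model learned under the maximum entropy IRL framework, and hand the best-scoring trajectory to the low-level controller. A minimal sketch of that idea, assuming a linear score over handcrafted trajectory features and a single expert demonstration per scene (the function names, feature dimensions, and linear-score form are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def score(weights, features):
    # features: (num_proposals, num_features) matrix of handcrafted
    # trajectory features; higher score = better trajectory.
    return features @ weights

def maxent_irl_step(weights, features, expert_idx, lr=0.1):
    # Softmax distribution over trajectory proposals induced by the scores
    # (the maximum entropy IRL trajectory distribution).
    s = score(weights, features)
    p = np.exp(s - s.max())
    p /= p.sum()
    # Max-ent IRL gradient: expert features minus expected features
    # under the current model; ascending this maximizes the expert
    # trajectory's log-likelihood.
    grad = features[expert_idx] - p @ features
    return weights + lr * grad

# Toy scene: 8 candidate trajectories with 4 features each (assumed shapes).
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))
expert_idx = 3                      # index of the demonstrated trajectory

w = np.zeros(4)
for _ in range(200):
    w = maxent_irl_step(w, features, expert_idx)

# The best-scoring proposal is what would be tracked by the controller.
best = int(np.argmax(score(w, features)))
```

After enough gradient steps the learned weights rank the demonstrated trajectory highest, so `best` recovers `expert_idx`; in the paper's setting the scorer instead generalizes across 500+ hours of Las Vegas demonstrations rather than fitting one scene.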