Authors
Eshed Ohn-Bar, Aditya Prakash, Aseem Behl, Kashyap Chitta, Andreas Geiger
Publication date
2020
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages
11296-11305
Description
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited-visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
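The core idea in the abstract, a mixture model over a set of driving policies, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual architecture: it uses randomly initialized linear policy heads and a linear gating network (the names `policy_heads`, `gate_weights`, `mixture_action`, and all dimensions are illustrative assumptions), and shows how a behavior-cloning loss on the blended action could be computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: K expert policies, observation dim, action dim.
K, OBS, ACT = 3, 8, 2

# Toy stand-ins for learned networks: one linear head per driving mode,
# plus a linear gating network that scores each head given the observation.
policy_heads = rng.normal(size=(K, ACT, OBS)) * 0.1
gate_weights = rng.normal(size=(K, OBS)) * 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def mixture_action(obs):
    """Blend the K expert actions using gating weights (mixture of policies)."""
    alphas = softmax(gate_weights @ obs)   # (K,) mixture weights, sum to 1
    actions = policy_heads @ obs           # (K, ACT) per-expert actions
    return alphas @ actions, alphas        # (ACT,) blended action

def bc_loss(obs, demo_action):
    """Behavior-cloning objective: squared error to the demonstrated action."""
    pred, _ = mixture_action(obs)
    return float(np.mean((pred - demo_action) ** 2))

obs = rng.normal(size=OBS)
action, alphas = mixture_action(obs)
```

In the paper's setting, such a model would first be trained by minimizing a behavior-cloning loss like `bc_loss` on demonstrations, then refined with the navigation task reward; the sketch only covers the forward pass and the cloning objective.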
Total citations
Scholar articles
E Ohn-Bar, A Prakash, A Behl, K Chitta, A Geiger - Proceedings of the IEEE/CVF Conference on Computer …, 2020