Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

J. Sun, H. Averbuch-Elor, Q. Wang, N. Snavely. Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020. Springer.
Abstract
Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis. Yet learning a computational model for this purpose is challenging due to semantic ambiguity and a lack of labeled data: current datasets only tell you where people are, not where they could be. We tackle this problem by leveraging information from existing datasets, without additional labeling. We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints. However, this augmented data is still sparse. We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss. Using this strategy, we demonstrate a model that learns to predict a walkability map from a single image. We evaluate our model on the Waymo and Cityscapes datasets, demonstrating superior performance compared to baselines and state-of-the-art models.
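The class-balanced classification loss mentioned in the abstract can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so this assumes a simple inverse-frequency weighting over the sparsely labeled pixels, with an ignore mask for unlabeled regions; the contextual adversarial term is omitted:

```python
import numpy as np

def class_balanced_bce(pred, labels, mask):
    """Class-balanced binary cross-entropy over sparsely labeled pixels.

    pred:   predicted walkability probabilities in (0, 1), shape (H, W)
    labels: 1 = walkable, 0 = not walkable, shape (H, W)
    mask:   1 where a label exists, 0 where the pixel is unlabeled
    Positive and negative labeled pixels are re-weighted so each class
    contributes equally, countering the imbalance of sparse footprint labels.
    """
    eps = 1e-7
    pos = mask * labels
    neg = mask * (1 - labels)
    # Inverse-frequency weights (guarded against empty classes).
    w_pos = 0.5 / max(pos.sum(), 1)
    w_neg = 0.5 / max(neg.sum(), 1)
    bce = -(labels * np.log(pred + eps) + (1 - labels) * np.log(1 - pred + eps))
    return float((w_pos * pos * bce + w_neg * neg * bce).sum())
```

Unlabeled pixels contribute nothing to the loss, which is what makes the strategy usable with the sparse hidden-footprint supervision described above.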