Authors
Andres Milioto, Ignacio Vizzo, Jens Behley, Cyrill Stachniss
Publication date
2019/11/3
Conference
2019 IEEE/RSJ international conference on intelligent robots and systems (IROS)
Pages
4213-4220
Publisher
IEEE
Description
Perception in autonomous vehicles is often carried out through a suite of different sensing modalities. Given the massive amount of openly available labeled RGB data and the advent of high-quality deep learning algorithms for image-based recognition, high-level semantic perception tasks are predominantly solved using high-resolution cameras. As a result, other sensor modalities potentially useful for this task are often ignored. In this paper, we push the state of the art in LiDAR-only semantic segmentation forward in order to provide another independent source of semantic information to the vehicle. Our approach can accurately perform full semantic segmentation of LiDAR point clouds at sensor frame rate. We exploit range images as an intermediate representation in combination with a Convolutional Neural Network (CNN) exploiting the rotating LiDAR sensor model. To obtain accurate results, we …
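The abstract mentions range images as an intermediate representation for the rotating LiDAR sensor model. Below is a minimal, illustrative sketch of such a spherical projection of a point cloud into a range image; the image size (H, W) and the vertical field-of-view bounds are assumed example values (roughly matching a 64-beam spinning LiDAR), not parameters taken from the paper.

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud into an H x W range image.

    H, W and the field-of-view bounds are illustrative assumptions
    (approximate values for a 64-beam rotating LiDAR).
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = abs(fov_up) + abs(fov_down)               # total vertical field of view

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)   # range r of each point

    yaw = np.arctan2(y, x)                          # horizontal angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))  # vertical angle

    # normalize angles to [0, 1] and scale to image coordinates
    u = 0.5 * (1.0 - yaw / np.pi) * W               # column index
    v = (1.0 - (pitch + abs(fov_down)) / fov) * H   # row index

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # fill the range image; process points from far to near so closer points win
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image
```

A CNN can then consume this 2D range image (optionally stacked with remission and per-point coordinates as extra channels) and its per-pixel predictions can be projected back onto the original points.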