Authors
Mohammed Hossny, Khaled Saleh, Mohammed Attia, Ahmed Abobakr, Julie Iskander
Publication date
2020/6/8
Journal
Computer Vision and Pattern Recognition, Image and Video Processing
Description
LiDAR data is becoming increasingly essential with the rise of autonomous vehicles. Its ability to provide a 360° horizontal field of view of the surrounding point cloud equips self-driving vehicles with enhanced situational awareness. While synthetic LiDAR data generation pipelines offer a good way to advance machine learning research on LiDAR, they suffer from a major shortcoming: rendering time. Physically accurate LiDAR simulators (e.g. BlenSor) are computationally expensive, with an average rendering time of 14–60 seconds per frame for urban scenes. This is often compensated for by using 3D models with simplified polygon topology (low-poly assets), as is the case in CARLA (Dosovitskiy et al., 2017). However, this comes at the price of coarse-grained, unrealistic LiDAR point clouds. In this paper, we present a novel method to simulate LiDAR point clouds with a faster rendering time of 1 second per frame. The proposed method relies on spherical UV unwrapping of equirectangular Z-buffer images. We chose BlenSor (Gschwandtner et al., 2011) as the baseline against which to compare the point clouds generated by the proposed method. The reported error for complex urban landscapes is 4.28 cm over a scanning range of 2–120 meters with Velodyne HDL-64E2 parameters. The proposed method reported a total time of 3.2±0.31 seconds per frame, whereas the BlenSor baseline reported 16.2±1.82 seconds.
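The core idea described above — recovering a LiDAR-like point cloud from an equirectangular depth image via spherical unwrapping — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the vertical field-of-view default (chosen to resemble a Velodyne HDL-64E-class sensor) are assumptions, and the depth image is assumed to store per-pixel ray range rather than a planar Z-buffer value.

```python
import numpy as np

def equirect_depth_to_pointcloud(depth, v_fov=(-24.9, 2.0), h_fov=(-180.0, 180.0)):
    """Unproject an equirectangular range image into a 3D point cloud.

    Each pixel (row, col) maps to an (elevation, azimuth) angle pair on the
    sphere; the pixel value is treated as the range along that ray.
    depth : (H, W) array of ranges in meters.
    Returns an (H*W, 3) array of XYZ points.
    """
    h, w = depth.shape
    # Per-column azimuth and per-row elevation angles, in radians.
    az = np.deg2rad(np.linspace(h_fov[0], h_fov[1], w))
    el = np.deg2rad(np.linspace(v_fov[1], v_fov[0], h))  # top row = max elevation
    az, el = np.meshgrid(az, el)
    # Spherical -> Cartesian conversion.
    x = depth * np.cos(el) * np.cos(az)
    y = depth * np.cos(el) * np.sin(az)
    z = depth * np.sin(el)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

With a constant range image, every unprojected point lies at that range from the origin, which is a quick sanity check that the spherical conversion preserves distances.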
Total citations
Scholar articles
M Hossny, K Saleh, M Attia, A Abobakr, J Iskander - Computer Vision and Pattern Recognition, Image and …, 2020