The perception of the surrounding environment is a key requirement for autonomous driving systems, yet the computation of an accurate semantic representation of the scene starting …
As camera and LiDAR sensors capture complementary information in autonomous driving, great efforts have been made to conduct semantic segmentation through multi-modal data …
Z Zhuang, R Li, K Jia, Q Wang… - Proceedings of the …, 2021 - openaccess.thecvf.com
3D LiDAR (light detection and ranging) semantic segmentation is important in scene understanding for many applications, such as autonomous driving and robotics. For example …
J Li, H Dai, H Han, Y Ding - … of the IEEE/CVF conference on …, 2023 - openaccess.thecvf.com
LiDAR and camera are two modalities available for 3D semantic segmentation in autonomous driving. The popular LiDAR-only methods severely suffer from inferior …
Recent works on 3D semantic segmentation propose to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting …
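The projection step described here — mapping each LiDAR point into the camera image plane so that per-point image features can be looked up — can be sketched as follows. This is a minimal illustration of standard pinhole projection, not the method of any cited paper; the function name and matrix conventions are assumptions:

```python
import numpy as np

def project_points_to_image(points_xyz, T_cam_from_lidar, K):
    """Project N LiDAR points (N, 3) into pixel coordinates.

    T_cam_from_lidar: (4, 4) extrinsic transform, LiDAR frame -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (uv, valid): pixel coordinates (N, 2) and a mask marking
    points that lie in front of the camera.
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])       # (N, 4) homogeneous
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]            # (N, 3) camera frame
    valid = cam[:, 2] > 1e-6                              # keep points in front
    uvw = (K @ cam.T).T                                   # perspective projection
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)    # divide by depth
    return uv, valid
```

With an identity extrinsic and a principal point at (50, 50), a point 10 m straight ahead lands on the principal point, which is a quick sanity check for the matrix conventions.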
Multi-modal 3D object detection has been an active research topic in autonomous driving. Nevertheless, it is non-trivial to explore the cross-modal feature fusion between sparse 3D …
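A common baseline for the sparse-to-dense fusion problem raised here is to sample the dense image feature map at each point's projected pixel location and concatenate the result with the per-point features. A minimal sketch, using nearest-neighbor sampling for simplicity (a real pipeline would typically interpolate bilinearly); all names are illustrative:

```python
import numpy as np

def fuse_point_image_features(point_feats, image_feats, uv):
    """Concatenate per-point features with image features sampled at
    the points' projected pixel locations.

    point_feats: (N, C_p) sparse per-point features.
    image_feats: (H, W, C_i) dense image feature map.
    uv: (N, 2) float pixel coordinates (col, row) from projection.
    """
    h, w, _ = image_feats.shape
    # Nearest-neighbor sampling, clamped to the image bounds.
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    sampled = image_feats[rows, cols]                     # (N, C_i)
    return np.concatenate([point_feats, sampled], axis=1) # (N, C_p + C_i)
```

The output keeps the sparse point set as the carrier of the fused features, which is one of the two usual choices (the other being to paint point features into the dense image grid).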
Point-, voxel-, and range-views are three representative forms of point clouds. All of them have accurate 3D measurements but lack color and texture information. RGB images …
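The range-view form mentioned here is typically obtained by a spherical projection of the point cloud onto an H×W grid indexed by azimuth and elevation. A minimal sketch; the image size and field-of-view values are illustrative assumptions (roughly matching a 64-beam spinning LiDAR), not taken from any cited paper:

```python
import numpy as np

def spherical_projection(points_xyz, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Map points (N, 3) to range-image pixel indices (row, col) and range.

    fov_up / fov_down: vertical field of view in degrees.
    """
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1)                   # range
    yaw = np.arctan2(y, x)                                   # azimuth
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    col = 0.5 * (1.0 - yaw / np.pi) * w                      # azimuth -> [0, w)
    row = (1.0 - (pitch - fov_down_r) / fov) * h             # elevation -> [0, h)
    col = np.clip(np.floor(col), 0, w - 1).astype(np.int32)
    row = np.clip(np.floor(row), 0, h - 1).astype(np.int32)
    return row, col, r
```

Writing the range (and any per-point features) into the (row, col) cells yields the 2D range image that image-style networks can then consume.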
Although recent point cloud analysis achieves impressive progress, the paradigm of representation learning from single modality gradually meets its bottleneck. In this work, we …
Y Yang, W Zhai, H Luo, Y Cao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Grounding 3D object affordance seeks to locate objects' "action possibilities" regions in the 3D space, which serves as a link between perception and operation for embodied agents …