Vision-based semantic segmentation in scene understanding for autonomous driving: Recent achievements, challenges, and outlooks

K Muhammad, T Hussain, H Ullah… - IEEE Transactions …, 2022 - ieeexplore.ieee.org
Scene understanding plays a crucial role in autonomous driving by utilizing sensory data for
contextual information extraction and decision making. Beyond modeling advances, the …

Multimodal semantic segmentation in autonomous driving: A review of current approaches and future perspectives

G Rizzoli, F Barbato, P Zanuttigh - Technologies, 2022 - mdpi.com
The perception of the surrounding environment is a key requirement for autonomous driving
systems, yet the computation of an accurate semantic representation of the scene starting …

2DPASS: 2D priors assisted semantic segmentation on LiDAR point clouds

X Yan, J Gao, C Zheng, C Zheng, R Zhang… - … on Computer Vision, 2022 - Springer
As camera and LiDAR sensors capture complementary information in autonomous driving,
great efforts have been made to conduct semantic segmentation through multi-modality data …

Perception-aware multi-sensor fusion for 3D LiDAR semantic segmentation

Z Zhuang, R Li, K Jia, Q Wang… - Proceedings of the …, 2021 - openaccess.thecvf.com
3D LiDAR (light detection and ranging) semantic segmentation is important in scene
understanding for many applications, such as autonomous driving and robotics. For example …

MSeg3D: Multi-modal 3D semantic segmentation for autonomous driving

J Li, H Dai, H Han, Y Ding - … of the IEEE/CVF conference on …, 2023 - openaccess.thecvf.com
LiDAR and camera are two modalities available for 3D semantic segmentation in
autonomous driving. The popular LiDAR-only methods severely suffer from inferior …

Learning multi-view aggregation in the wild for large-scale 3D semantic segmentation

D Robert, B Vallet, L Landrieu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Recent works on 3D semantic segmentation propose to exploit the synergy between images
and point clouds by processing each modality with a dedicated network and projecting …

Homogeneous multi-modal feature fusion and interaction for 3D object detection

X Li, B Shi, Y Hou, X Wu, T Ma, Y Li, L He - European Conference on …, 2022 - Springer
Multi-modal 3D object detection has been an active research topic in autonomous driving.
Nevertheless, it is non-trivial to explore the cross-modal feature fusion between sparse 3D …

UniSeg: A unified multi-modal LiDAR segmentation network and the OpenPCSeg codebase

Y Liu, R Chen, X Li, L Kong, Y Yang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Point-, voxel-, and range-views are three representative forms of point clouds. All of
them have accurate 3D measurements but lack color and texture information. RGB images …

Let images give you more: Point cloud cross-modal training for shape analysis

X Yan, H Zhan, C Zheng, J Gao… - Advances in Neural …, 2022 - proceedings.neurips.cc
Although recent point cloud analysis has achieved impressive progress, the paradigm of
representation learning from a single modality is gradually reaching its bottleneck. In this work, we …

Grounding 3D object affordance from 2D interactions in images

Y Yang, W Zhai, H Luo, Y Cao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Grounding 3D object affordance seeks to locate objects' "action possibilities" regions in the
3D space, which serves as a link between perception and operation for embodied agents …