In this paper, we investigate the application of Vehicle-to-Everything (V2X) communication to improve the perception performance of autonomous vehicles. We present a robust …
As camera and LiDAR sensors capture complementary information in autonomous driving, great efforts have been made to conduct semantic segmentation through multi-modality data …
With the development of deep representation learning, reinforcement learning (RL) has become a powerful framework now capable of learning complex policies …
J Li, H Dai, H Han, Y Ding - … of the IEEE/CVF conference on …, 2023 - openaccess.thecvf.com
LiDAR and camera are two modalities available for 3D semantic segmentation in autonomous driving. The popular LiDAR-only methods severely suffer from inferior …
Z Zhuang, R Li, K Jia, Q Wang… - Proceedings of the …, 2021 - openaccess.thecvf.com
3D LiDAR (light detection and ranging) semantic segmentation is important in scene understanding for many applications, such as autonomous driving and robotics. For example …
Multi-agent collaborative perception, as a potential application of vehicle-to-everything communication, could significantly improve the perception performance of autonomous …
L Xiang, D Wang - Smart Agricultural Technology, 2023 - Elsevier
In recent years, three-dimensional (3D) machine vision techniques have been widely employed in agriculture and food systems, leveraging advanced deep learning …
Test-time adaptation approaches have recently emerged as a practical solution for handling domain shift without access to the source domain data. In this paper, we propose and …
Point-, voxel-, and range-views are three representative forms of point clouds. All of them have accurate 3D measurements but lack color and texture information. RGB images …