Multimodal PointPillars for Efficient Object Detection in Autonomous Vehicles

M Oliveira, R Cerqueira, JR Pinto… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Autonomous Vehicles aim to understand their surrounding environment by detecting
relevant objects in the scene, a task that can be performed using a combination of sensors. The …

V2VFormer: Multi-Modal Vehicle-to-Vehicle Cooperative Perception via Global-Local Transformer

H Yin, D Tian, C Lin, X Duan, J Zhou… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Multi-vehicle cooperative perception has recently emerged to facilitate the long-range and
large-scale perception ability of connected automated vehicles (CAVs). Nonetheless …

Real-Time Driving Scene Understanding via Efficient 3-D LiDAR Processing

W Jang, M Park, E Kim - IEEE Transactions on Instrumentation …, 2022 - ieeexplore.ieee.org
3-D light detection and ranging (3-D LiDAR) sensors are widely used in autonomous
vehicles; however, their drawback is their significant computational processing requirement …

All-in-One Drive: A Comprehensive Perception Dataset with High-Density Long-Range Point Clouds

X Weng, Y Man, J Park, Y Yuan, M O'Toole, KM Kitani - 2021 - openreview.net
Developing datasets that cover comprehensive sensors, annotations, and out-of-distribution
data is important for innovating robust multi-sensor multi-task perception systems in …

CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception

Y Kim, J Shin, S Kim, IJ Lee… - Proceedings of the …, 2023 - openaccess.thecvf.com
Autonomous driving requires an accurate and fast 3D perception system that includes 3D
object detection, tracking, and segmentation. Although recent low-cost camera-based …

Towards Autonomous Driving: A Multi-Modal 360° Perception Proposal

J Beltrán, C Guindel, I Cortés, A Barrera… - 2020 IEEE 23rd …, 2020 - ieeexplore.ieee.org
In this paper, a multi-modal 360° framework for 3D object detection and tracking for
autonomous vehicles is presented. The process is divided into four main stages. First …

CoBEVFusion: Cooperative Perception with LiDAR-Camera Bird's-Eye View Fusion

D Qiao, F Zulkernine - arXiv preprint arXiv:2310.06008, 2023 - arxiv.org
Autonomous Vehicles (AVs) use multiple sensors to gather information about their
surroundings. By sharing sensor data between Connected Autonomous Vehicles (CAVs) …

LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition

Z Zhou, J Xu, G Xiong, J Ma - IEEE Robotics and Automation …, 2023 - ieeexplore.ieee.org
Place recognition is one of the most crucial modules enabling autonomous vehicles to
identify previously visited places in GPS-denied environments. Sensor fusion is …

MENet: Multi-Modal Mapping Enhancement Network for 3D Object Detection in Autonomous Driving

M Liu, Y Chen, J Xie, Y Zhu, Y Zhang… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
To achieve more accurate perception performance, LiDAR and camera data are increasingly
combined to improve 3D object detection. However, it is still a non-trivial task to …

OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication

R Xu, H Xiang, X Xia, X Han, J Li… - … Conference on Robotics …, 2022 - ieeexplore.ieee.org
Employing Vehicle-to-Vehicle communication to enhance perception performance in self-
driving technology has attracted considerable attention recently; however, the absence of a …