MotionNet: Joint perception and motion prediction for autonomous driving based on bird's eye view maps

P Wu, S Chen, DN Metaxas - Proceedings of the IEEE/CVF …, 2020 - openaccess.thecvf.com
The ability to reliably perceive the environmental states, particularly the existence of objects
and their motion behavior, is crucial for autonomous driving. In this work, we propose an …

BEVerse: Unified perception and prediction in bird's-eye-view for vision-centric autonomous driving

Y Zhang, Z Zhu, W Zheng, J Huang, G Huang… - arXiv preprint arXiv …, 2022 - arxiv.org
In this paper, we present BEVerse, a unified framework for 3D perception and prediction
based on multi-camera systems. Unlike existing studies focusing on the improvement of …

DriveWorld: 4D pre-trained scene understanding via world models for autonomous driving

C Min, D Zhao, L Xiao, J Zhao, X Xu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Vision-centric autonomous driving has recently attracted wide attention due to its lower cost.
Pre-training is essential for extracting a universal representation. However, current vision …

BEVFormer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers

Z Li, W Wang, H Li, E Xie, C Sima, T Lu, Y Qiao… - European conference on …, 2022 - Springer
3D visual perception tasks, including 3D detection and map segmentation based on
multi-camera images, are essential for autonomous driving systems. In this work, we present …

BEVFusion4D: Learning LiDAR-Camera Fusion Under Bird's-Eye-View via Cross-Modality Guidance and Temporal Aggregation

H Cai, Z Zhang, Z Zhou, Z Li, W Ding, J Zhao - arXiv preprint arXiv …, 2023 - arxiv.org
Integrating LiDAR and Camera information into Bird's-Eye-View (BEV) has become an
essential topic for 3D object detection in autonomous driving. Existing methods mostly adopt …

Are we ready for vision-centric driving streaming perception? The ASAP benchmark

X Wang, Z Zhu, Y Zhang, G Huang… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, vision-centric perception has flourished in various autonomous driving tasks,
including 3D detection, semantic map construction, motion forecasting, and depth …

One million scenes for autonomous driving: ONCE dataset

J Mao, M Niu, C Jiang, H Liang, J Chen, X Liang… - arXiv preprint arXiv …, 2021 - arxiv.org
Current perception models in autonomous driving have become notorious for relying heavily
on a mass of annotated data to cover unseen cases and address the long-tail problem. On …

SurroundOcc: Multi-camera 3D occupancy prediction for autonomous driving

Y Wei, L Zhao, W Zheng, Z Zhu… - Proceedings of the …, 2023 - openaccess.thecvf.com
3D scene understanding plays a vital role in vision-based autonomous driving.
While most existing methods focus on 3D object detection, they have difficulty describing …

LAformer: Trajectory prediction for autonomous driving with lane-aware scene constraints

M Liu, H Cheng, L Chen, H Broszio… - Proceedings of the …, 2024 - openaccess.thecvf.com
Existing trajectory prediction methods for autonomous driving typically rely on one-stage
trajectory prediction models which condition future trajectories on observed trajectories …

ViP3D: End-to-end visual trajectory prediction via 3D agent queries

J Gu, C Hu, T Zhang, X Chen, Y Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Perception and prediction are two separate modules in the existing autonomous driving
systems. They interact with each other via hand-picked features such as agent bounding …