ST-P3: End-to-end vision-based autonomous driving via spatial-temporal feature learning

S Hu, L Chen, P Wu, H Li, J Yan, D Tao - European Conference on …, 2022 - Springer
Many existing autonomous driving paradigms involve a multi-stage discrete pipeline of
tasks. To better predict the control signals and enhance user safety, an end-to-end approach …

GenAD: Generative end-to-end autonomous driving

W Zheng, R Song, X Guo, L Chen - arXiv preprint arXiv:2402.11502, 2024 - arxiv.org
Directly producing planning results from raw sensors has been a long-desired solution for
autonomous driving and has attracted increasing attention recently. Most existing end-to …

TBP-Former: Learning Temporal Bird's-Eye-View Pyramid for Joint Perception and Prediction in Vision-Centric Autonomous Driving

S Fang, Z Wang, Y Zhong, J Ge… - Proceedings of the …, 2023 - openaccess.thecvf.com
Vision-centric joint perception and prediction (PnP) has become an emerging trend in
autonomous driving research. It predicts the future states of the traffic participants in the …
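
A core step behind temporal BEV stacks like this one is warping past BEV feature maps into the current ego frame before fusing them. The sketch below shows a generic version of that alignment via `F.affine_grid`/`F.grid_sample`; the BEV extent, the planar (SE(2)-only) motion model, and the sign conventions are assumptions, and TBP-Former's pose-synchronized pyramid adds multi-scale fusion on top of this.

```python
import torch
import torch.nn.functional as F

def warp_bev_to_current(past_bev, yaw, translation, bev_extent=51.2):
    """Warp a past BEV feature map into the current ego frame.

    past_bev:    (B, C, H, W) features in the past frame's ego coordinates.
    yaw:         (B,) heading change in radians between the two frames.
    translation: (B, 2) ego (x, y) displacement in metres.
    bev_extent:  half-width of the BEV grid in metres (assumed value).
    """
    cos, sin = torch.cos(yaw), torch.sin(yaw)
    # Normalise translation into grid_sample's [-1, 1] coordinate range.
    t = translation / bev_extent
    # One 2x3 affine matrix per batch element.
    theta = torch.stack([
        torch.stack([cos, -sin, t[:, 0]], dim=1),
        torch.stack([sin,  cos, t[:, 1]], dim=1),
    ], dim=1)  # (B, 2, 3)
    # Note: affine_grid maps output coords to input coords, so whether this
    # realises past->current or its inverse depends on your pose convention.
    grid = F.affine_grid(theta, past_bev.shape, align_corners=False)
    return F.grid_sample(past_bev, grid, align_corners=False)
```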

DriveWorld: 4D pre-trained scene understanding via world models for autonomous driving

C Min, D Zhao, L Xiao, J Zhao, X Xu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Vision-centric autonomous driving has recently attracted wide attention due to its lower cost.
Pre-training is essential for extracting a universal representation. However, current vision …

NEAT: Neural attention fields for end-to-end autonomous driving

K Chitta, A Prakash, A Geiger - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial
prerequisite for autonomous driving. We present NEural ATtention fields (NEAT), a novel …
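
In spirit, a neural attention field is an implicit function that maps a spatio-temporal query point to attention weights over image features, which are then aggregated and decoded. Below is a heavily simplified single-pass sketch under assumed dimensions; NEAT itself refines the attention iteratively and decodes both semantics and waypoint offsets.

```python
import torch
import torch.nn as nn

class AttentionField(nn.Module):
    """Simplified attention-field sketch: an MLP maps a BEV query point
    (x, y, t) plus a global context vector to attention weights over N
    flattened image patch features; the attended feature is decoded into
    per-query semantics. All dimensions are illustrative assumptions."""

    def __init__(self, feat_dim=256, n_patches=64, n_classes=5):
        super().__init__()
        self.attn_mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_patches),          # one logit per patch
        )
        self.decoder = nn.Linear(feat_dim, n_classes)

    def forward(self, query_xyt, patch_feats):
        # query_xyt: (B, 3); patch_feats: (B, N, C)
        ctx = patch_feats.mean(dim=1)                      # global context
        logits = self.attn_mlp(torch.cat([query_xyt, ctx], dim=-1))
        attn = torch.softmax(logits, dim=-1)               # (B, N)
        fused = torch.einsum('bn,bnc->bc', attn, patch_feats)
        return self.decoder(fused)                         # per-query output
```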

VAD: Vectorized scene representation for efficient autonomous driving

B Jiang, S Chen, Q Xu, B Liao, J Chen… - Proceedings of the …, 2023 - openaccess.thecvf.com
Autonomous driving requires a comprehensive understanding of the surrounding
environment for reliable trajectory planning. Previous works rely on dense rasterized scene …
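
The vectorized alternative to dense rasterized grids is to encode each map element or agent track as a polyline of point vectors. The VectorNet-style encoder below illustrates that idea as a stand-in sketch; VAD's actual query-based transformer design differs, and all dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class PolylineEncoder(nn.Module):
    """Encode each map element or agent track (a polyline of point
    vectors) into one instance token via a shared point MLP followed by
    permutation-invariant max pooling."""

    def __init__(self, point_dim=4, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )

    def forward(self, polylines):
        # polylines: (B, num_instances, num_points, point_dim)
        point_feats = self.point_mlp(polylines)
        # Pool over the points of each polyline.
        instance_tokens, _ = point_feats.max(dim=2)   # (B, I, hidden)
        return instance_tokens
```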

Real-to-virtual domain unification for end-to-end autonomous driving

L Yang, X Liang, T Wang… - Proceedings of the …, 2018 - openaccess.thecvf.com
In the spectrum of vision-based autonomous driving, vanilla end-to-end models are uninterpretable
and suboptimal in performance, while mediated perception models require …

Deep object-centric policies for autonomous driving

D Wang, C Devin, QZ Cai, F Yu… - … Conference on Robotics …, 2019 - ieeexplore.ieee.org
While learning visuomotor skills in an end-to-end manner is appealing, deep neural
networks are often uninterpretable and fail in surprising ways. For robotics tasks, such as …
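
An object-centric policy in this vein pools features per detected object and lets the controller attend to the relevant ones, rather than acting on a monolithic image embedding. The sketch below assumes per-object descriptors are already ROI-pooled upstream; the relevance-scoring layer and dimensions are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ObjectCentricPolicy(nn.Module):
    """Score each detected object for task relevance, form an
    attention-weighted scene summary, and map it to control outputs."""

    def __init__(self, obj_dim=128, n_actions=2):
        super().__init__()
        self.relevance = nn.Linear(obj_dim, 1)
        self.policy = nn.Sequential(
            nn.Linear(obj_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # e.g. steer and throttle
        )

    def forward(self, obj_feats):
        # obj_feats: (B, num_objects, obj_dim) per-object descriptors.
        w = torch.softmax(self.relevance(obj_feats).squeeze(-1), dim=-1)
        scene = torch.einsum('bn,bnd->bd', w, obj_feats)
        return self.policy(scene)
```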

Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's-Eye View

J Yang, E Xie, M Liu, JM Alvarez - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Recent vision-only perception models for autonomous driving achieved promising results by
encoding multi-view image features into Bird's-Eye-View (BEV) space. A critical step and the …
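
The "critical step" the snippet refers to is lifting per-pixel image features into 3D before splatting them onto the BEV plane. The sketch below shows the widely used Lift-Splat formulation with a categorical depth distribution; the cited paper's contribution is to replace those discrete bins with a parametric depth model, and the bin count and channel sizes here are assumptions.

```python
import torch
import torch.nn as nn

class DepthLift(nn.Module):
    """Predict a categorical depth distribution per pixel and take its
    outer product with the image features, yielding a feature frustum
    that camera geometry can splat into BEV downstream."""

    def __init__(self, in_ch=256, feat_ch=64, n_depth_bins=48):
        super().__init__()
        self.depth_head = nn.Conv2d(in_ch, n_depth_bins, kernel_size=1)
        self.feat_head = nn.Conv2d(in_ch, feat_ch, kernel_size=1)

    def forward(self, img_feats):
        # img_feats: (B, in_ch, H, W) from a multi-view image backbone.
        depth = self.depth_head(img_feats).softmax(dim=1)  # (B, D, H, W)
        feats = self.feat_head(img_feats)                  # (B, C, H, W)
        # Outer product: every pixel's feature, weighted per depth bin.
        frustum = depth.unsqueeze(1) * feats.unsqueeze(2)  # (B, C, D, H, W)
        return frustum
```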

Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline

P Wu, X Jia, L Chen, J Yan, H Li… - Advances in Neural …, 2022 - proceedings.neurips.cc
Current end-to-end autonomous driving methods either run a controller based on a planned
trajectory or perform control prediction directly, which have spanned two separately studied …
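
The two paradigms the snippet contrasts can be sketched as two heads over a shared feature: one regresses waypoints for a downstream controller, the other predicts control directly. The toy fusion below (simple averaging, plus a hypothetical waypoint-following rule in place of a real PID controller) only gestures at TCP's learned, situation-aware combination.

```python
import torch
import torch.nn as nn

class TwoBranchDriver(nn.Module):
    """Trajectory branch + direct control branch over a shared feature,
    with a naive average as a stand-in for a learned fusion scheme."""

    def __init__(self, feat_dim=512, n_waypoints=4):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.traj_head = nn.Linear(feat_dim, n_waypoints * 2)  # (x, y) pairs
        self.ctrl_head = nn.Linear(feat_dim, 2)                # steer, throttle

    def waypoint_follower(self, waypoints):
        # Hypothetical controller: steer toward the first waypoint,
        # throttle proportional to its distance (capped at 1).
        first = waypoints[:, 0, :]
        steer = torch.atan2(first[:, 1], first[:, 0]).unsqueeze(1)
        throttle = first.norm(dim=1, keepdim=True).clamp(max=1.0)
        return torch.cat([steer, throttle], dim=1)

    def forward(self, feats):
        waypoints = self.traj_head(feats).view(-1, self.n_waypoints, 2)
        ctrl_direct = self.ctrl_head(feats)
        ctrl_from_traj = self.waypoint_follower(waypoints)
        return waypoints, 0.5 * (ctrl_direct + ctrl_from_traj)
```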