Authors
Zetong Yang, Li Chen, Yanan Sun, Hongyang Li
Publication date
2024
Conference
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Description
In contrast to extensive studies on general vision pre-training, scalable pre-training for visual autonomous driving remains seldom explored. Visual autonomous driving applications require features encompassing semantics, 3D geometry, and temporal information simultaneously for joint perception, prediction, and planning, posing dramatic challenges for pre-training. To resolve this, we bring up a new pre-training task termed visual point cloud forecasting: predicting future point clouds from historical visual input. The key merit of this task is that it captures the synergic learning of semantics, 3D structures, and temporal dynamics; hence, it shows superiority in various downstream tasks. To cope with this new problem, we present ViDAR, a general model to pre-train downstream visual encoders. It first extracts historical embeddings with the encoder. These representations are then transformed to 3D geometric space via a novel Latent Rendering operator for future point cloud prediction. Experiments show significant gains in downstream tasks, e.g., 3.1% NDS on 3D detection, about 10% error reduction on motion forecasting, and about 15% lower collision rate on planning.
Total citations
Scholar articles
Z Yang, L Chen, Y Sun, H Li - Proceedings of the IEEE/CVF Conference on Computer …, 2024