Multi-view Vision-Prompt Fusion Network: Can 2D Pre-trained Model Boost 3D Point Cloud Data-scarce Learning?

H Peng, B Li, B Zhang, X Chen, T Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
… We first encode a 3D point cloud into multiple views. Then, we propose a multi-view prompt
vision fusion module based on an attention mechanism to exchange and fuse information from …
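
The snippet only hints at how the per-view features interact; a minimal, hypothetical sketch of attention-based fusion across rendered views is shown below (the feature dimension, number of views, and module name are assumptions, not the authors' implementation):

```python
# Hypothetical sketch of attention-based fusion of per-view features
# extracted by a 2D backbone (illustrative only).
import torch
import torch.nn as nn

class MultiViewAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Self-attention over the view axis lets each rendered view
        # exchange information with every other view.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (batch, num_views, dim) features, one row per view.
        fused, _ = self.attn(view_feats, view_feats, view_feats)
        fused = self.norm(view_feats + fused)
        # Pool over views to obtain a single cloud-level descriptor.
        return fused.mean(dim=1)

feats = torch.randn(2, 6, 256)                   # 2 clouds, 6 views each
print(MultiViewAttentionFusion()(feats).shape)   # torch.Size([2, 256])
```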

Vehicle detection and localization using 3d lidar point cloud and image semantic segmentation

R Barea, C Pérez, LM Bergasa… - 2018 21st …, 2018 - ieeexplore.ieee.org
… We propose a multimodal fusion framework that processes both 3D LIDAR point cloud and
RGB image to obtain robust vehicle position and size in a Bird’s Eye View (BEV). Semantic …
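
As a rough illustration of this kind of pipeline (assumed calibration matrices and a hypothetical vehicle class id; not the paper's code), LiDAR points can be tagged with image semantics by projecting them into the segmentation mask and then reduced to a BEV footprint:

```python
# Sketch: fuse a semantic segmentation mask with LiDAR points to get a
# vehicle's bird's-eye-view (BEV) footprint. Geometry conventions are assumed.
import numpy as np

def project_points(points_xyz, K, T_cam_lidar):
    """Project Nx3 LiDAR points to pixels with intrinsics K (3x3) and the
    LiDAR-to-camera extrinsic T_cam_lidar (4x4)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0.1                    # keep points ahead of the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv.astype(int), in_front

def vehicle_bev_box(points_xyz, seg_mask, K, T_cam_lidar, vehicle_id=1):
    uv, in_front = project_points(points_xyz, K, T_cam_lidar)
    h, w = seg_mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels = seg_mask[uv[valid, 1], uv[valid, 0]]
    vehicle_pts = points_xyz[in_front][valid][labels == vehicle_id]
    if len(vehicle_pts) == 0:
        return None
    # Axis-aligned BEV extent (x, y) of the vehicle-labelled points.
    mins, maxs = vehicle_pts[:, :2].min(0), vehicle_pts[:, :2].max(0)
    return {"center": (mins + maxs) / 2, "size": maxs - mins}
```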

Fusing vision and lidar - synchronization, correction and occlusion reasoning

S Schneider, M Himmelsbach, T Luettel… - 2010 IEEE Intelligent …, 2010 - ieeexplore.ieee.org
… , the computer vision community as well … fusion of both sensors is reasonable in order to
provide color images with depth and reflectance information as well as 3D LIDAR point clouds

Clip2point: Transfer CLIP to point cloud classification with image-depth pre-training

T Huang, B Dong, Y Yang, X Huang… - … Computer Vision, 2023 - openaccess.thecvf.com
vision-language (VL) pre-training methods to 3D vision. … domain, and adapt it to point cloud
classification. We introduce a … aggregators and gated fusion for downstream representative …
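
The image-depth pre-training named in the title presupposes rendering point clouds into depth maps that a 2D encoder can consume; a toy orthographic rasteriser along those lines (projection axis and resolution are assumptions, not the paper's renderer) might look like:

```python
# Toy sketch: rasterise a point cloud into a 224x224 depth image suitable
# as input to a 2D vision encoder. Orthographic top-down projection assumed.
import numpy as np

def point_cloud_to_depth(points_xyz: np.ndarray, size: int = 224) -> np.ndarray:
    pts = points_xyz - points_xyz.min(0)
    pts = pts / (pts.max() + 1e-8)                       # normalise to [0, 1]
    u = np.clip((pts[:, 0] * (size - 1)).astype(int), 0, size - 1)
    v = np.clip((pts[:, 1] * (size - 1)).astype(int), 0, size - 1)
    depth = np.zeros((size, size), dtype=np.float32)
    # Each pixel keeps its strongest response, i.e. the smallest normalised z.
    np.maximum.at(depth, (v, u), 1.0 - pts[:, 2])
    return depth

depth = point_cloud_to_depth(np.random.rand(2048, 3))
print(depth.shape, depth.max())   # (224, 224) and a value in (0, 1]
```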

Point cloud registration with 2D and 3D fusion information on mobile robot integrated vision system

X Zhang, L Li, D Tu - 2013 IEEE International Conference on …, 2013 - ieeexplore.ieee.org
… machine vision, the method based on images has been … both sparse visual features and
dense point clouds for frame-to-… The point cloud is registered with 2D and 3D fusion information. …

PointFusionNet: Point feature fusion network for 3D point clouds analysis

P Liang, Z Fang, B Huang, H Zhou, X Tang, C Zhong - Applied Intelligence, 2021 - Springer
… 2D computer vision tasks. However, CNNs are unable to process irregular, unstructured data
like point clouds directly; how to extract meaningful information from point clouds for analysis …

Virtual namesake point multi-source point cloud data fusion based on FPFH feature difference

L Zheng, Z Li - Sensors, 2021 - mdpi.com
… point cloud and point cloud of different resolutions. In the case of noise and distortion in the
point cloud …

3D point cloud recognition based on a multi-view convolutional neural network

L Zhang, J Sun, Q Zheng - Sensors, 2018 - mdpi.com
… , it is mainly due to the fact that this paper presents a view fusion network which can make
full use of the information from multiple views. As can be seen in Table 4, the accuracy drops …

Bevfusion: A simple and robust lidar-camera fusion framework

T Liang, H Xie, K Yu, Z Xia, Z Lin… - Advances in …, 2022 - proceedings.neurips.cc
vision on-vehicle perception systems, LiDAR and camera are usually the two most critical
sensors that provide accurate point cloud … to classify objects on point clouds when LiDAR does …
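
As a minimal sketch of the general recipe implied here (independent camera and LiDAR streams producing grid-aligned BEV feature maps that are fused late; channel sizes and the module name are assumptions, not the BEVFusion code):

```python
# Sketch: late fusion of camera-BEV and LiDAR-BEV feature maps on a shared grid.
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_ch: int = 80, lidar_ch: int = 128, out_ch: int = 128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_ch + lidar_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # Both maps share the same BEV grid, e.g. (B, C, 200, 200). Because the
        # streams are independent, either one can be zeroed out if its sensor
        # fails, which is what makes this style of late fusion robust.
        return self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))

cam, lidar = torch.randn(1, 80, 200, 200), torch.randn(1, 128, 200, 200)
print(SimpleBEVFusion()(cam, lidar).shape)   # torch.Size([1, 128, 200, 200])
```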

Super-sensor for 360-degree environment perception: Point cloud segmentation using image features

R Varga, A Costea, H Florea, I Giosan… - 2017 IEEE 20th …, 2017 - ieeexplore.ieee.org
… The fusion of these modalities increases the dimensionality … In contrast, a 6D-vision approach
[2] computes the 3D scene … for obtaining the low-level fusion of LIDAR and camera data. …