A review of visual SLAM methods for autonomous driving vehicles

J Cheng, L Zhang, Q Chen, X Hu, J Cai - Engineering Applications of …, 2022 - Elsevier
Autonomous driving vehicles require a precise localization and mapping solution in
different driving environments. In this context, Simultaneous Localization and Mapping …

A review of multi-sensor fusion SLAM systems based on 3D LiDAR

X Xu, L Zhang, J Yang, C Cao, W Wang, Y Ran, Z Tan… - Remote Sensing, 2022 - mdpi.com
The ability of intelligent unmanned platforms to achieve autonomous navigation and
positioning in a large-scale environment has become increasingly demanding, in which …

ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM

C Campos, R Elvira, JJG Rodríguez… - IEEE Transactions …, 2021 - ieeexplore.ieee.org
This article presents ORB-SLAM3, the first system able to perform visual, visual-inertial and
multimap SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye …

An overview on visual SLAM: From tradition to semantic

W Chen, G Shang, A Ji, C Zhou, X Wang, C Xu, Z Li… - Remote Sensing, 2022 - mdpi.com
Visual SLAM (VSLAM) has been developing rapidly due to its advantages: low-cost
sensors, easy fusion with other sensors, and richer environmental information. Traditional …

NeuralRecon: Real-time coherent 3D reconstruction from monocular video

J Sun, Y Xie, L Chen, X Zhou… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
We present a novel framework named NeuralRecon for real-time 3D scene reconstruction
from a monocular video. Unlike previous methods that estimate single-view depth maps …

F-LOAM: Fast LiDAR odometry and mapping

H Wang, C Wang, CL Chen… - 2021 IEEE/RSJ …, 2021 - ieeexplore.ieee.org
Simultaneous Localization and Mapping (SLAM) has wide robotic applications such as
autonomous driving and unmanned aerial vehicles. Both computational efficiency and …

Collaborative multi-robot search and rescue: Planning, coordination, perception, and active vision

JP Queralta, J Taipalmaa, BC Pullinen, VK Sarker… - IEEE …, 2020 - ieeexplore.ieee.org
Search and rescue (SAR) operations can take significant advantage from supporting
autonomous or teleoperated robots and multi-robot systems. These can aid in mapping and …

GVINS: Tightly coupled GNSS–visual–inertial fusion for smooth and consistent state estimation

S Cao, X Lu, S Shen - IEEE Transactions on Robotics, 2022 - ieeexplore.ieee.org
Visual–inertial odometry (VIO) is known to suffer from drifting, especially over long-term runs.
In this article, we present GVINS, a nonlinear optimization-based system that tightly fuses …

D3VO: Deep depth, deep pose and deep uncertainty for monocular visual odometry

N Yang, L Stumberg, R Wang… - Proceedings of the …, 2020 - openaccess.thecvf.com
We propose D3VO as a novel framework for monocular visual odometry that exploits deep
networks on three levels: deep depth, deep pose, and deep uncertainty estimation. We first propose a …

OpenVINS: A research platform for visual-inertial estimation

P Geneva, K Eckenhoff, W Lee, Y Yang… - … on Robotics and …, 2020 - ieeexplore.ieee.org
In this paper, we present an open platform, termed OpenVINS, for visual-inertial estimation
research for both the academic community and practitioners from industry. The open …