Deep global-relative networks for end-to-end 6-dof visual localization and odometry

Y Lin, Z Liu, J Huang, C Wang, G Du, J Bai, S Lian
Pacific Rim International Conference on Artificial Intelligence, 2019, Springer
Abstract
Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. This paper therefore proposes novel deep end-to-end networks for the long-term 6-DoF VO task. The approach fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve monocular localization accuracy: the relative sub-networks are implemented to smooth the VO trajectory, while the global sub-networks are designed to avoid the drift problem. All parameters are jointly optimized using Cross Transformation Constraints (CTC), which represent the temporal geometric consistency of consecutive frames, together with the Mean Square Error (MSE) between the predicted pose and the ground truth. Experimental results on both indoor and outdoor datasets show that the method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
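The CTC idea in the abstract can be illustrated with a small sketch: if the global sub-network predicts absolute poses G_t and the relative sub-network predicts frame-to-frame transforms R_t, temporal geometric consistency requires inv(G_t) @ G_{t+1} to agree with R_t. The sketch below, in numpy, shows this consistency term combined with a pose MSE; the function names, the residual form (Frobenius-style MSE on 4x4 matrices), and the weighting parameter `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pose_matrix(R, t):
    # Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def ctc_loss(global_poses, relative_poses):
    # Cross Transformation Constraint (illustrative form): the relative motion
    # implied by consecutive global predictions, inv(G_t) @ G_{t+1}, should
    # match the relative sub-network's prediction R_t for the same frame pair.
    residuals = []
    for t in range(len(relative_poses)):
        implied = np.linalg.inv(global_poses[t]) @ global_poses[t + 1]
        residuals.append(np.mean((implied - relative_poses[t]) ** 2))
    return float(np.mean(residuals))

def pose_mse(predicted, ground_truth):
    # MSE supervision between predicted and ground-truth poses.
    return float(np.mean((np.asarray(predicted) - np.asarray(ground_truth)) ** 2))

def joint_loss(global_poses, relative_poses, gt_poses, alpha=1.0):
    # Joint objective (assumed weighting): global-pose MSE plus an
    # alpha-weighted temporal-consistency term.
    return pose_mse(global_poses, gt_poses) + alpha * ctc_loss(global_poses, relative_poses)
```

For a geometrically consistent trajectory the CTC term vanishes, so the joint loss reduces to the supervised MSE; any disagreement between the two sub-networks' predictions adds a positive penalty, which is what couples them during joint optimization.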