J-MOD2: Joint Monocular Obstacle Detection and Depth Estimation

M Mancini, G Costante, P Valigi… - IEEE Robotics and …, 2018 - ieeexplore.ieee.org
In this letter, we propose an end-to-end deep architecture that jointly learns to detect
obstacles and estimate their depth for MAV flight applications. Most of the existing …

[PDF] Vision based Indoor Obstacle Avoidance using a Deep Convolutional Neural Network.

MO Khan, GB Parker - IJCCI, 2019 - pdfs.semanticscholar.org
A robust obstacle avoidance control program was developed for a mobile robot in the
context of tight, dynamic indoor environments. Deep Learning was applied in order to …

An Unmanned Aerial Vehicle Indoor Low-Computation Navigation Method Based on Vision and Deep Learning

TL Hsieh, ZS Jhan, NJ Yeh, CY Chen, CT Chuang - Sensors, 2023 - mdpi.com
Recently, unmanned aerial vehicles (UAVs) have found extensive indoor applications. In
numerous indoor UAV scenarios, navigation paths remain consistent. While many indoor …

3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments

A Saha, BC Dhara - International Journal of Intelligent Robotics and …, 2024 - Springer
An accurate perception with a rapid response is fundamental for any autonomous vehicle to
navigate safely. Light detection and ranging (LiDAR) sensors provide an accurate estimation …

Object-sensitive potential fields for mobile robot navigation and mapping in indoor environments

J Chen, P Kim, YK Cho, J Ueda - 2018 15th International …, 2018 - ieeexplore.ieee.org
Mobile robots may be deployed in indoor environments for numerous tasks such as object
manipulation, search and rescue, and infrastructure mapping. To be safely deployed in …

Small imaging depth LIDAR and DCNN-based localization for automated guided vehicle

S Ito, S Hiratsuka, M Ohta, H Matsubara, M Ogawa - Sensors, 2018 - mdpi.com
We present our third prototype sensor and a localization method for Automated Guided
Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion …

Mobile robot navigation in unknown corridors using line and dense features of point clouds

K Qian, Z Chen, X Ma, B Zhou - IECON 2015-41st Annual …, 2015 - ieeexplore.ieee.org
This paper addresses the problem of mobile robot navigation in unknown corridors using
RGB-Depth cameras. Instead of building a full and global 3D map of the environment, the …

Densifying SLAM for UAV navigation by fusion of monocular depth prediction

Y Habib, P Papadakis, C Le Barz… - 2023 9th …, 2023 - ieeexplore.ieee.org
Simultaneous Localization and Mapping (SLAM) research has reached a level of maturity
enabling systems to build autonomously an accurate sparse map of the environment while …

[CITATION] Blind navigation using deep learning-based obstacle detection

W Gunethilake - 2021

Spatio-temporal fusion of LiDAR and camera data for omnidirectional depth perception

L Zhang, X Yu, Y Adu-Gyamfi… - Transportation research …, 2024 - journals.sagepub.com
Object recognition and depth perception are two tightly coupled tasks that are indispensable
for situational awareness. Most autonomous systems are able to perform these tasks by …