Deep sensor data fusion for environmental perception of automated systems

F Duffhauß - 2024 - tobias-lib.ub.uni-tuebingen.de
Automated systems require a comprehensive understanding of their surroundings to safely
interact with the environment. By effectively fusing sensory information from multiple sensors …

Indoor Semantic Scene Understanding using Multi-modality Fusion

M Gopinathan, G Truong, J Abu-Khalaf - arXiv preprint arXiv:2108.07616, 2021 - arxiv.org
Seamless Human-Robot Interaction is the ultimate goal of developing service robotic
systems. For this, the robotic agents have to understand their surroundings to better …

Deep learning for LiDAR-only and LiDAR-fusion 3D perception: A survey

D Wu, Z Liang, G Chen - Intelligence & Robotics, 2022 - oaepublish.com
The perception system for robotics and autonomous cars relies on the collaboration among
multiple types of sensors to understand the surrounding environment. LiDAR has shown …

Hybrid6D: A Dual-Stream Transformer-CNN Approach for 6D Object Pose Estimation from RGB-D Images

S Fu, Q Zhang, X Sun, M Liu… - 2023 IEEE International …, 2023 - ieeexplore.ieee.org
Estimating the 6D pose presents a formidable challenge due to the intricate nature of world
objects and the myriad issues encountered when acquiring data from real-world scenes …
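
To make the dual-stream idea concrete, here is a minimal sketch of fusing a CNN stream over the RGB image with a transformer stream over depth-derived 3D points for 6D pose regression. All layer shapes, module names, and the concatenation-based fusion are illustrative assumptions, not Hybrid6D's published architecture.

import torch
import torch.nn as nn

class DualStreamPoseNet(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        # CNN stream: RGB image -> one global appearance feature vector
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Transformer stream: 3D points from the depth map -> geometry feature
        self.point_proj = nn.Linear(3, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Fused feature -> 6 pose parameters (3 translation + 3 rotation)
        self.head = nn.Linear(2 * dim, 6)

    def forward(self, rgb, points):
        appearance = self.cnn(rgb)                                # (B, dim)
        geometry = self.transformer(self.point_proj(points))      # (B, N, dim)
        geometry = geometry.mean(dim=1)                           # pool over points
        return self.head(torch.cat([appearance, geometry], -1))   # (B, 6)

pose = DualStreamPoseNet()(torch.randn(2, 3, 64, 64), torch.randn(2, 512, 3))
print(pose.shape)  # torch.Size([2, 6])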

Indoor Semantic Scene Understanding Using 2D-3D Fusion

M Gopinathan, G Truong… - 2021 Digital Image …, 2021 - ieeexplore.ieee.org
Seamless Human-Robot Interaction is the ultimate goal of developing service robotic
systems. For this, the robotic agents have to understand their surroundings to better …

Active 6D multi-object pose estimation in cluttered scenarios with deep reinforcement learning

J Sock, G Garcia-Hernando… - 2020 IEEE/RSJ …, 2020 - ieeexplore.ieee.org
In this work, we explore how a strategic selection of camera movements can facilitate the
task of 6D multi-object pose estimation in cluttered scenarios while respecting real-world …
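
The active-perception loop behind such work can be sketched generically: score a discrete set of candidate camera moves with a learned value function and greedily execute the highest-scoring one. The network, feature sizes, and greedy policy below are assumptions for illustration, not the paper's reinforcement-learning formulation.

import torch
import torch.nn as nn

# Hypothetical value network: maps a scene encoding plus a candidate
# camera move to a predicted pose-estimation improvement.
value_net = nn.Sequential(nn.Linear(16 + 3, 64), nn.ReLU(), nn.Linear(64, 1))

def select_next_view(state_feat, candidate_moves):
    """state_feat: (16,) scene encoding; candidate_moves: (K, 3) camera offsets."""
    k = candidate_moves.shape[0]
    pairs = torch.cat([state_feat.expand(k, -1), candidate_moves], dim=1)
    scores = value_net(pairs).squeeze(1)      # one predicted gain per move
    return candidate_moves[scores.argmax()]   # greedy choice among candidates

state = torch.randn(16)    # e.g. pooled features of the current observation
moves = torch.randn(8, 3)  # e.g. eight discretized camera displacements
print(select_next_view(state, moves))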

SparseFusion3D: Sparse sensor fusion for 3D object detection by radar and camera in environmental perception

Z Yu, W Wan, M Ren, X Zheng… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In the context of autonomous driving environment perception, multi-modal fusion plays a
pivotal role in enhancing robustness, completeness, and accuracy, thereby extending the …
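
A step that sparse radar-camera fusion generally requires is associating radar returns with image features: project each 3D radar point through the camera intrinsics and sample the feature map at the resulting pixel. The intrinsics, shapes, and nearest-neighbor sampling below are illustrative assumptions, not SparseFusion3D's specific fusion scheme.

import torch

def project_radar_to_image(radar_xyz, K):
    """radar_xyz: (N, 3) points in camera coordinates (z > 0); K: (3, 3) intrinsics."""
    uvw = radar_xyz @ K.T            # pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> pixel coordinates

def sample_image_features(feat_map, uv):
    """feat_map: (C, H, W) image features; uv: (N, 2) pixels, nearest-neighbor."""
    C, H, W = feat_map.shape
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    return feat_map[:, v, u].T       # (N, C): one feature per radar point

K = torch.tensor([[500., 0., 160.], [0., 500., 120.], [0., 0., 1.]])
radar = torch.tensor([[1.0, 0.5, 10.0], [-2.0, 0.0, 20.0]])  # two radar returns
feats = torch.randn(64, 240, 320)
uv = project_radar_to_image(radar, K)
fused = sample_image_features(feats, uv)  # concat with radar velocity/RCS downstream
print(uv.shape, fused.shape)  # torch.Size([2, 2]) torch.Size([2, 64])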

IFGAN—A Novel Image Fusion Model to Fuse 3D Point Cloud Sensory Data

HA Ignatious, H El-Sayed, S Bouktif - Journal of Sensor and Actuator …, 2024 - mdpi.com
To enhance the level of autonomy in driving, it is crucial to ensure optimal execution of
critical maneuvers in all situations. However, numerous accidents involving autonomous …

PillarFlowNet: A real-time deep multitask network for LiDAR-based 3D object detection and scene flow estimation

F Duffhauss, SA Baur - 2020 IEEE/RSJ International …, 2020 - ieeexplore.ieee.org
Mobile robotic platforms require a precise understanding of other agents in their
surroundings as well as their respective motion in order to operate safely. Scene flow in …
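
The multitask structure can be illustrated with a shared encoder over a bird's-eye-view grid feeding two light heads, one for detection and one for scene flow. Channel counts, head outputs, and the pillarized-input assumption are illustrative, not PillarFlowNet's published configuration.

import torch
import torch.nn as nn

class MultiTaskBEVNet(nn.Module):
    def __init__(self, in_ch: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(  # shared BEV feature extractor
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(128, 1 + 7, 1)  # objectness + 7 box parameters
        self.flow_head = nn.Conv2d(128, 2, 1)     # 2D motion vector per BEV cell

    def forward(self, bev):
        feat = self.backbone(bev)  # both tasks reuse one feature pass
        return self.det_head(feat), self.flow_head(feat)

det, flow = MultiTaskBEVNet()(torch.randn(1, 64, 200, 200))
print(det.shape, flow.shape)  # (1, 8, 200, 200) (1, 2, 200, 200)

Sharing the backbone is what makes such a multitask design attractive for real-time use: detection and flow amortize the same feature computation.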

CMA: Cross-modal attention for 6D object pose estimation

L Zou, Z Huang, F Wang, Z Yang, G Wang - Computers & Graphics, 2021 - Elsevier
Deep learning methods for 6D object pose estimation based on RGB and depth (RGB-D)
images have been successfully applied to robotic manipulation and grasping. Among these …
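
A minimal sketch of the cross-modal attention idea, not the CMA paper's specific design: tokens from one modality (RGB) attend to tokens from the other (depth), so each appearance feature gathers geometric context. The dimensions, residual-plus-LayerNorm layout, and single-direction attention are assumptions; such modules are often applied symmetrically in both directions.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, depth_tokens):
        # Queries come from RGB, keys/values from depth, so each RGB token
        # aggregates the depth features most relevant to it.
        fused, _ = self.attn(rgb_tokens, depth_tokens, depth_tokens)
        return self.norm(rgb_tokens + fused)  # residual connection

rgb = torch.randn(2, 196, 256)    # (batch, tokens, dim) from an RGB encoder
depth = torch.randn(2, 196, 256)  # matching tokens from a depth encoder
print(CrossModalAttention()(rgb, depth).shape)  # torch.Size([2, 196, 256])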