Vision-based holistic scene understanding towards proactive human–robot collaboration

J Fan, P Zheng, S Li - Robotics and Computer-Integrated Manufacturing, 2022 - Elsevier
Recently, human–robot collaboration (HRC) has emerged as a promising paradigm for mass
personalization in manufacturing owing to the potential to fully exploit the strength of human …

Artificial intelligence in the agri-food system: Rethinking sustainable business models in the COVID-19 scenario

A Di Vaio, F Boccia, L Landriani, R Palladino - Sustainability, 2020 - mdpi.com
The aim of the paper is to investigate the artificial intelligence (AI) function in the agri-food
industry, as well as the role of stakeholders in its supply chain. Above all, from the beginning …

Can an Embodied Agent Find Your “Cat-shaped Mug”? LLM-Based Zero-Shot Object Navigation

VS Dorbala, JF Mullen Jr… - IEEE Robotics and …, 2023 - ieeexplore.ieee.org
We present language-guided exploration (LGX), a novel algorithm for Language-Driven
Zero-Shot Object Goal Navigation (L-ZSON), where an embodied agent navigates to an …

Performing predefined tasks using the human–robot interaction on speech recognition for an industrial robot

MC Bingol, O Aydogmus - Engineering Applications of Artificial Intelligence, 2020 - Elsevier
People who are not experts in robotics can easily implement complex robotic applications by
using human–robot interaction (HRI). HRI systems require many complex operations such …

Vision-based navigation with language-based assistance via imitation learning with indirect intervention

K Nguyen, D Dey, C Brockett… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Abstract We present Vision-based Navigation with Language-based Assistance (VNLA), a
grounded vision-language task where an agent with visual perception is guided via …

Assister: Assistive navigation via conditional instruction generation

Z Huang, Z Shangguan, J Zhang, G Bar, M Boyd… - … on Computer Vision, 2022 - Springer
We introduce a novel vision-and-language navigation (VLN) task of learning to provide real-
time guidance to a blind follower situated in complex dynamic navigation scenarios …

Interactive navigation in environments with traversable obstacles using large language and vision-language models

Z Zhang, A Lin, CW Wong, X Chu… - … on Robotics and …, 2024 - ieeexplore.ieee.org
This paper proposes an interactive navigation framework by using large language and
vision-language models, allowing robots to navigate in environments with traversable …

Reve-ce: Remote embodied visual referring expression in continuous environment

X Li, D Guo, H Liu, F Sun - IEEE Robotics and Automation …, 2022 - ieeexplore.ieee.org
It has always been a great challenge for the robot to navigate in the visual world following
natural language instructions. Recently, several tasks such as the Vision-and-Language …

Talk to the vehicle: Language conditioned autonomous navigation of self driving cars

NN Sriram, T Maniar… - 2019 IEEE/RSJ …, 2019 - ieeexplore.ieee.org
We propose a novel pipeline that blends encodings from natural language and 3D semantic
maps obtained from visual imagery to generate local trajectories that are executed by a low …

A multi-granularity scene segmentation network for human-robot collaboration environment perception

J Fan, P Zheng, CKM Lee - 2022 IEEE/RSJ International …, 2022 - ieeexplore.ieee.org
Human-robot collaboration (HRC) has been considered a promising paradigm towards
futuristic human-centric smart manufacturing, to meet the thriving needs of mass …