Vision-based human action recognition: An overview and real world challenges

I Jegham, AB Khalifa, I Alouani, MA Mahjoub - Forensic Science …, 2020 - Elsevier
With a wide range of applications in computer vision, Human Action Recognition has
become one of the most attractive research fields. Ambiguities in recognizing actions do …

Deep convolutional neural networks for human action recognition using depth maps and postures

A Kamel, B Sheng, P Yang, P Li… - IEEE Transactions on …, 2018 - ieeexplore.ieee.org
In this paper, we present a method (Action-Fusion) for human action recognition from depth
maps and posture data using convolutional neural networks (CNNs). Two input descriptors …

A Critical Analysis on Machine Learning Techniques for Video-based Human Activity Recognition of Surveillance Systems: A Review

S Jahan, MR Islam - arXiv preprint arXiv:2409.00731, 2024 - arxiv.org
The upsurge of abnormal activities in crowded locations such as airports, train stations, bus
stops, and shopping malls underscores the need for an intelligent surveillance system. An …

Natural language acquisition and grounding for embodied robotic systems

M Alomari, P Duckworth, D Hogg, A Cohn - Proceedings of the AAAI …, 2017 - ojs.aaai.org
We present a novel, cognitively plausible framework capable of learning the grounding in
visual semantics and the grammar of natural language commands given to a robot in a table …

Multiple stream deep learning model for human action recognition

Y Gu, X Ye, W Sheng, Y Ou, Y Li - Image and Vision Computing, 2020 - Elsevier
Human action recognition is one of the most important and challenging topics in the field of
image processing. Unlike object recognition, action recognition requires motion feature …

Qsrlib: a software library for online acquisition of qualitative spatial relations from video

Y Gatsoulis, M Alomari, C Burbridge… - … Reasoning (QR16), at …, 2016 - academia.edu
There is increasing interest in using Qualitative Spatial Relations as a formalism to abstract
from noisy and large amounts of video data in order to form high-level conceptualisations, e.g. …

Unsupervised human activity analysis for intelligent mobile robots

P Duckworth, DC Hogg, AG Cohn - Artificial Intelligence, 2019 - Elsevier
The success of intelligent mobile robots operating and collaborating with humans in daily
living environments depends on their ability to generalise and learn human movements, and …

Toward holistic scene understanding: A transfer of human scene perception to mobile robots

F Graf, J Lindermayr, Ç Odabaşi… - IEEE Robotics & …, 2022 - ieeexplore.ieee.org
The long-term vision for robotics is to have fully autonomous mobile robots that perceive the
environment as humans do or even better. This article transfers the core ideas from human …

The multi-angle extended three-dimensional activities (META) stimulus set: A tool for studying event cognition

MA Bezdek, TT Nguyen, CS Hall, TS Braver… - Behavior Research …, 2023 - Springer
To study complex human activity and how it is perceived and remembered, it is valuable to
have large-scale, well-characterized stimuli that are representative of such activity. We …

Latent Dirichlet allocation for unsupervised activity analysis on an autonomous mobile robot

P Duckworth, M Alomari, J Charles, D Hogg… - Proceedings of the AAAI …, 2017 - ojs.aaai.org
For autonomous robots to collaborate on joint tasks with humans, they require a shared
understanding of an observed scene. We present a method for unsupervised learning of …