B Hu, Y Chen, E Keogh - Proceedings of the 2013 SIAM International …, 2013 - SIAM
Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later …
M Ramanathan, WY Yau… - IEEE Transactions on …, 2014 - ieeexplore.ieee.org
Given a video sequence, the task of action recognition is to identify the most similar action among the action sequences learned by the system. Such human action recognition is …
K Cho, X Chen - … Conference on Computer Vision Theory and …, 2014 - ieeexplore.ieee.org
Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision research. Currently, most systems only classify datasets with a …
In this paper we introduce a novel method for action/movement recognition in motion capture data. The joint orientation angles and the forward differences of these angles in …
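The snippet is cut off, but the named ingredients (joint orientation angles plus their forward differences) suggest a simple per-frame feature construction. The sketch below illustrates that idea under stated assumptions; the function name, array layout, and the choice of two difference orders are illustrative, not taken from the paper.

import numpy as np

def angle_features(joint_angles, order=2):
    """Stack joint orientation angles with their forward differences.

    joint_angles : (T, J) array of T frames and J joint angles (radians).
    Returns a (T - order, J * (order + 1)) per-frame feature matrix that
    concatenates the raw angles with their first `order` forward differences.
    """
    feats = [joint_angles]
    diff = joint_angles
    for _ in range(order):
        diff = np.diff(diff, axis=0)  # forward difference along the time axis
        feats.append(diff)
    # Trim every stream to the shortest length so rows align frame by frame.
    t = min(f.shape[0] for f in feats)
    return np.hstack([f[:t] for f in feats])

# Example: 100 frames of 20 joint angles -> (98, 60) feature matrix.
X = angle_features(np.random.rand(100, 20))
print(X.shape)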
Action and gesture recognition from motion capture and RGB-D camera sequences has recently emerged as a prominent and challenging research topic. The current methods can …
B Hu, Y Chen, E Keogh - Data Mining and Knowledge Discovery, 2016 - Springer
Much of the vast literature on time series classification makes several assumptions about data and the algorithm's eventual deployment that are almost certainly unwarranted. For …
C Youssef - Pattern Recognition Letters, 2016 - Elsevier
Action recognition based on the 3D coordinates of body skeleton joints is an important topic in computer vision applications and human-robot interaction. At present, most 3D data are …
X Chen, M Koskela - … Analysis: 18th Scandinavian Conference, SCIA 2013 …, 2013 - Springer
In this paper we present a robust motion recognition framework for both motion capture and RGB-D sensor data. We extract four different types of features and apply a temporal …
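The snippet breaks off at "apply a temporal …", so the exact aggregation step is not recoverable from this listing. Temporal pyramid pooling is one common way such frameworks turn several concatenated per-frame feature types into a fixed-length sequence descriptor; the sketch below shows that generic pattern as an assumption, not the authors' exact method.

import numpy as np

def temporal_pyramid(features, levels=3):
    """Pool a (T, D) per-frame feature matrix into a fixed-length vector by
    mean-pooling over 1, 2, ..., 2**(levels-1) temporal segments."""
    T = features.shape[0]
    pooled = []
    for lvl in range(levels):
        for seg in np.array_split(np.arange(T), 2 ** lvl):
            pooled.append(features[seg].mean(axis=0))
    return np.concatenate(pooled)

# Concatenate two illustrative per-frame feature types, then pool over time.
pos = np.random.rand(120, 60)   # e.g. joint positions
vel = np.diff(pos, axis=0)      # e.g. joint velocities (one frame shorter)
t = min(len(pos), len(vel))
frame_feats = np.hstack([pos[:t], vel[:t]])
print(temporal_pyramid(frame_feats).shape)  # (1 + 2 + 4) segments * 120 dims = (840,)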
This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as …
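The abstract is truncated, but a "bag of poses" model conventionally quantizes each frame's pose against a learned codebook and represents a sequence as a histogram of codebook assignments. A minimal sketch of that standard pipeline follows; the k-means codebook, the n_poses parameter, and the L1 normalization are illustrative assumptions rather than details from the article.

import numpy as np
from sklearn.cluster import KMeans

def bag_of_poses(sequences, n_poses=50):
    """Represent each skeleton sequence as a histogram over a pose codebook.

    sequences : list of (T_i, D) arrays, each row a flattened 3D pose.
    Returns (histograms, fitted KMeans codebook).
    """
    # Learn a codebook of prototypical poses from all frames pooled together.
    codebook = KMeans(n_clusters=n_poses, n_init=10).fit(np.vstack(sequences))
    hists = []
    for seq in sequences:
        words = codebook.predict(seq)                     # pose index per frame
        h = np.bincount(words, minlength=n_poses).astype(float)
        hists.append(h / h.sum())                         # normalize by length
    return np.array(hists), codebook

The resulting fixed-length histograms can then be fed to any standard classifier, which is what makes this representation attractive for variable-length action sequences.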