Authors
Hatice Gunes, Massimo Piccardi
Publication date
2005/10/12
Conference
2005 IEEE international conference on systems, man and cybernetics
Volume
4
Pages
3437-3443
Publisher
IEEE
Description
This paper presents an approach to automatic visual emotion recognition from two modalities: face and body. First, individual classifiers are trained on each modality separately. Second, we fuse facial expression and affective body gesture information at the feature level, where the data from both modalities are combined before classification, and at the decision level, where the outputs of the monomodal systems are integrated using suitable criteria. We then evaluate these two fusion approaches against monomodal emotion recognition based on the facial expression modality alone. In the experiments performed, emotion classification using the two modalities achieved better recognition accuracy than classification using the facial modality alone. Moreover, fusion at the feature level yielded better recognition than fusion at the decision level.
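A minimal sketch (not the authors' implementation) of the two fusion strategies described above: feature-level fusion concatenates the face and body feature vectors before training a single classifier, while decision-level fusion trains one classifier per modality and combines their class posteriors. The feature dimensions, synthetic data, and choice of classifier and combination rule (sum rule) below are illustrative assumptions, not details from the paper.

```python
# Sketch of feature-level vs. decision-level fusion of face and body modalities.
# All sizes, the synthetic features, and the classifier choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, d_face, d_body, n_classes = 600, 20, 12, 6          # hypothetical dataset sizes
y = rng.integers(0, n_classes, size=n)
X_face = rng.normal(size=(n, d_face)) + y[:, None] * 0.3   # synthetic face features
X_body = rng.normal(size=(n, d_body)) + y[:, None] * 0.2   # synthetic body features

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Feature-level fusion: combine modalities before classification.
X_fused = np.hstack([X_face, X_body])
clf_fused = RandomForestClassifier(random_state=0).fit(X_fused[idx_train], y[idx_train])
acc_feature = accuracy_score(y[idx_test], clf_fused.predict(X_fused[idx_test]))

# Decision-level fusion: train a classifier per modality, then integrate
# their outputs (here, a simple sum of class posteriors).
clf_face = RandomForestClassifier(random_state=0).fit(X_face[idx_train], y[idx_train])
clf_body = RandomForestClassifier(random_state=0).fit(X_body[idx_train], y[idx_train])
posteriors = (clf_face.predict_proba(X_face[idx_test])
              + clf_body.predict_proba(X_body[idx_test]))
acc_decision = accuracy_score(y[idx_test], posteriors.argmax(axis=1))

print(f"feature-level fusion accuracy:  {acc_feature:.3f}")
print(f"decision-level fusion accuracy: {acc_decision:.3f}")
```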
Total citations
[Citations-per-year histogram, 2006-2024]