Authors
Hatice Gunes, Massimo Piccardi, Maja Pantic
Publication date
2008
Book
Affective computing: focus on emotion expression, synthesis, and recognition
Pages
185-218
Publisher
IntechOpen
Description
Human affect can be sensed from a broad range of behavioral cues and signals that are available via visual, acoustic, and tactual expressions or presentations of emotions. Affective states can thus be recognized from visible/external signals such as gestures (e.g., facial expressions, body gestures, head movements, etc.) and speech (e.g., parameters such as pitch, energy, frequency, and duration), or from invisible/internal signals such as physiological signals (e.g., heart rate, skin conductivity, salivation, etc.), brain and scalp signals, and thermal infrared imagery.
Despite the available range of cues and modalities in human-human interaction (HHI), mainstream emotion research has mostly focused on facial expressions (Hadjikhani & De Gelder, 2003). In line with this, most past research on affect sensing and recognition has also focused on facial expressions and on data that was posed on demand or acquired in laboratory settings. Additionally, each sense, such as vision, hearing, and touch, has been considered in isolation. However, natural human-human interaction is multimodal and does not occur in predetermined, restricted, and controlled settings. In the day-to-day world, people do not present themselves to others as voice- or body-less faces or face- or body-less voices (Walker-Andrews, 1997). Moreover, the available emotional signals, such as facial expression, head movement, hand gestures, and voice, are unified in space and time (see Figure 1). They inherently share the same spatial location, and their occurrences are temporally synchronized. Cognitive neuroscience research thus claims that information coming from …
Total citations
[Citations-per-year chart, 2008–2024]
Scholar articles
H Gunes, M Piccardi, M Pantic - Affective computing: focus on emotion expression …, 2008