Quantifying facial expression intensity and signal use in deaf signers

C Stoll, H Rodger, J Lao, AR Richoz… - The Journal of Deaf Studies and Deaf Education, 2019 - academic.oup.com
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
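
To make the stimulus manipulations concrete, below is a minimal Python sketch of the two gradations the abstract describes: neutral-to-expression image morphs (intensity) and noise-to-full-signal blends (signal). It assumes grayscale face images stored as NumPy arrays in [0, 1]; the function names, the uniform noise model, and the step counts are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the two stimulus manipulations
# described in the abstract: neutral-to-expression morphs (intensity) and
# noise-to-full-signal blends (signal). Assumes grayscale face images stored
# as NumPy arrays in [0, 1]; all names and parameters are illustrative.
import numpy as np

def intensity_morph(neutral: np.ndarray, expression: np.ndarray, level: float) -> np.ndarray:
    """Linearly interpolate between a neutral and a full-expression image.

    level = 0.0 yields the neutral face; level = 1.0 the full expression.
    """
    return (1.0 - level) * neutral + level * expression

def signal_blend(image: np.ndarray, level: float, rng: np.random.Generator) -> np.ndarray:
    """Blend an image with uniform noise; level = 1.0 is the full-signal image."""
    noise = rng.uniform(image.min(), image.max(), size=image.shape)
    return level * image + (1.0 - level) * noise

# Example: 11-step stimulus ladders for one expression.
rng = np.random.default_rng(0)
neutral = rng.random((128, 128))   # placeholders; the real stimuli are face photographs
disgust = rng.random((128, 128))
intensity_ladder = [intensity_morph(neutral, disgust, lvl) for lvl in np.linspace(0.0, 1.0, 11)]
signal_ladder = [signal_blend(disgust, lvl, rng) for lvl in np.linspace(0.0, 1.0, 11)]
```

Per-level recognition accuracy collected with such ladders is what a Bayesian model of the kind the abstract mentions would consume to estimate, for each expression, the intensity and signal thresholds at which recognition is reached; the linear blends above are a generic stand-in for the morphing software actually used to build the stimuli.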