Authors
Katharina von Kriegstein, Özgür Dogan, Martina Grüter, Anne-Lise Giraud, Christian A Kell, Thomas Grüter, Andreas Kleinschmidt, Stefan J Kiebel
Publication date
2008/5/6
Journal
Proceedings of the National Academy of Sciences
Volume
105
Issue
18
Pages
6747-6752
Publisher
National Academy of Sciences
Abstract
Human face-to-face communication is essentially audiovisual. Typically, people talk to us face-to-face, providing concurrent auditory and visual input. Understanding someone is easier when there is visual input, because visual cues like mouth and tongue movements provide complementary information about speech content. Here, we hypothesized that, even in the absence of visual input, the brain optimizes both auditory-only speech and speaker recognition by harvesting speaker-specific predictions and constraints from distinct visual face-processing areas. To test this hypothesis, we performed behavioral and neuroimaging experiments in two groups: subjects with a face recognition deficit (prosopagnosia) and matched controls. The results show that observing a specific person talking for 2 min improves subsequent auditory-only speech and speaker recognition for this person. In both prosopagnosics and …
Total citations
[Per-year citation histogram, 2008–2024; individual counts not recoverable from the page extraction]
Scholar articles
K von Kriegstein, Ö Dogan, M Grüter, AL Giraud… - Proceedings of the National Academy of Sciences, 2008