Abstract The integration of Deep Learning (DL) and the Internet of Things (IoT) has revolutionized technology in the twenty-first century, enabling humans and machines to …
During the approximately 18–32 thousand years of domestication [1], dogs and humans have shared a similar social environment [2]. Dog and human vocalizations are thus familiar …
Voices carry large amounts of socially relevant information about persons, much like 'auditory faces'. Following the seminal model of face perception by Bruce and Young (1986), we propose …
K Johnson, MJ Sjerps - The handbook of speech perception, 2021 - Wiley Online Library
Speech produced by different people varies acoustically because of individual differences in vocal tract physiology. This acoustic variation in the “same” words of language presents a …
J Campbell, A Sharma - PLoS ONE, 2014 - journals.plos.org
Cortical cross-modal re-organization, or recruitment of auditory cortical areas for visual processing, has been well-documented in deafness. However, the degree of sensory …
Purpose: To determine the relative importance of acoustic parameters (fundamental frequency [F0], formant frequencies [FFs], aperiodicity, and spectrum level [SL]) on voice …
Listeners exploit small interindividual variations around a generic acoustical structure to discriminate and identify individuals from their voice—a key requirement for social …
H Blank, A Anwander… - Journal of Neuroscience, 2011 - Society for Neuroscience
Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice …
Social animals must detect, evaluate and respond to the emotional states of other individuals in their group. A constellation of gestures, vocalizations, and chemosignals …