Authors
Seunghyun Yoon, Seokhyun Byun, Subhadeep Dey, Kyomin Jung
Publication date
2019/5/12
Conference
ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Pages
2822-2826
Publisher
IEEE
Description
In this paper, we are interested in exploiting the textual and acoustic data of an utterance for the speech emotion classification task. The baseline approach models the information from audio and text independently using two deep neural networks (DNNs), and the outputs from both DNNs are then fused for classification. Rather than using knowledge from the two modalities separately, we propose a framework that exploits acoustic information in tandem with lexical data. The proposed framework uses two bi-directional long short-term memory (BLSTM) networks to obtain hidden representations of the utterance. Furthermore, we propose an attention mechanism, referred to as multi-hop attention, which is trained to automatically infer the correlation between the modalities. The multi-hop attention first computes the relevant segments of the textual data corresponding to the audio signal. The relevant textual data is then applied to …
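The description above outlines a dual-BLSTM encoder with a cross-modal multi-hop attention. Below is a minimal PyTorch sketch of one plausible reading, assuming dot-product attention, two hops (audio attends to text, then the attended text re-weights audio), and concatenation before the classifier. The module name AudioTextMHA, the hop count, and all dimensions are illustrative assumptions, not the authors' implementation; the description is truncated before the later steps are specified.

# Hypothetical sketch of the dual-BLSTM + multi-hop attention idea from the
# description. Names, hop count, and scoring function are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioTextMHA(nn.Module):
    def __init__(self, audio_dim, text_dim, hidden_dim, num_classes):
        super().__init__()
        # One BLSTM per modality, as in the description.
        self.audio_blstm = nn.LSTM(audio_dim, hidden_dim,
                                   batch_first=True, bidirectional=True)
        self.text_blstm = nn.LSTM(text_dim, hidden_dim,
                                  batch_first=True, bidirectional=True)
        # Classifier over the concatenated attended representations.
        self.classifier = nn.Linear(4 * hidden_dim, num_classes)

    @staticmethod
    def attend(query, keys):
        # Dot-product attention: query (B, 2H) over keys (B, T, 2H).
        scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)   # (B, T)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)   # (B, 2H)

    def forward(self, audio, text):
        a, _ = self.audio_blstm(audio)   # (B, Ta, 2H) audio hidden states
        t, _ = self.text_blstm(text)     # (B, Tt, 2H) text hidden states
        # Hop 1: the final audio state selects relevant text segments.
        h1 = self.attend(a[:, -1], t)
        # Hop 2: the attended text representation re-weights the audio frames.
        h2 = self.attend(h1, a)
        return self.classifier(torch.cat([h1, h2], dim=1))

Training would proceed with a standard cross-entropy loss over emotion classes; the baseline described above would instead encode each modality independently and fuse only the two networks' outputs at classification time.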
Total citations
[citation histogram by year, 2018-2024; per-year counts not recoverable]