Learning fine-grained cross modality excitement for speech emotion recognition

H Li, W Ding, Z Wu, Z Liu - arXiv preprint arXiv:2010.12733, 2020 - arxiv.org
Speech emotion recognition is a challenging task because emotion expression is complex, multimodal, and fine-grained. In this paper, we propose a novel multimodal deep learning approach to perform fine-grained emotion recognition from real-life speech. We design a temporal alignment mean-max pooling mechanism to capture the subtle and fine-grained emotions implied in every utterance. In addition, we propose a cross-modality excitement module that conducts sample-specific adjustment on cross-modality embeddings and adaptively recalibrates their values using the aligned latent features from the other modality. Our proposed model is evaluated on two well-known real-world speech emotion recognition datasets. The results demonstrate that our approach is superior on prediction tasks for multimodal speech utterances, outperforming a wide range of baselines in terms of prediction accuracy. Furthermore, we conduct detailed ablation studies to show that the temporal alignment mean-max pooling mechanism and the cross-modality excitement module both contribute significantly to the results. To encourage research reproducibility, we make the code publicly available at \url{https://github.com/tal-ai/FG_CME.git}.
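The abstract describes two components: a cross-modality excitement module that recalibrates one modality's embeddings using time-aligned features from the other modality, and a temporal mean-max pooling step that summarizes the aligned sequence into an utterance-level representation. The following is a minimal sketch of how such components could look; the sigmoid gating, layer sizes, and tensor shapes are assumptions for illustration, not the authors' implementation (which is available at the GitHub URL above).

```python
import torch
import torch.nn as nn

class CrossModalityExcitement(nn.Module):
    """Sketch of a cross-modality excitement block: features of one modality
    are rescaled by a gate computed from the time-aligned features of the
    other modality (hypothetical layer sizes and gating choice)."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x, y_aligned):
        # x, y_aligned: (batch, time, dim), already aligned on the time axis
        return x * self.gate(y_aligned)

def mean_max_pool(x):
    """Temporal mean-max pooling: concatenate mean and max over the time axis."""
    return torch.cat([x.mean(dim=1), x.max(dim=1).values], dim=-1)

# Toy usage with hypothetical shapes: acoustic frame embeddings gated by
# word embeddings aligned to the same frames, then pooled over time.
speech = torch.randn(2, 50, 128)
text = torch.randn(2, 50, 128)
excite = CrossModalityExcitement(128)
speech_excited = excite(speech, text)
utterance_repr = mean_max_pool(speech_excited)  # shape (2, 256)
```

The gated recalibration mirrors squeeze-and-excitation style feature rescaling, and the mean-max concatenation keeps both the average tone of the utterance and its most salient frames.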