Authors
Feiran Huang, Kaimin Wei, Jian Weng, Zhoujun Li
Publication date
2020/7/5
Journal
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)
Volume
16
Issue
3
Pages
1-19
Publisher
ACM
Description
Sentiment analysis of social multimedia data has attracted extensive research interest and has been applied to many tasks, such as election prediction and product evaluation. Sentiment analysis of a single modality (e.g., text or image) has been broadly studied. However, not much attention has been paid to the sentiment analysis of multimodal data. Different modalities usually carry complementary information, so it is necessary to learn the overall sentiment by combining the visual content with the text description. In this article, we propose a novel method—Attention-Based Modality-Gated Networks (AMGN)—to exploit the correlation between the image and text modalities and extract discriminative features for multimodal sentiment analysis. Specifically, a visual-semantic attention model is proposed to learn attended visual features for each word. To effectively combine the sentiment information on the …
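The abstract mentions a visual-semantic attention model that learns attended visual features for each word. The paper's exact formulation is not shown here, so the following is only a minimal sketch of that idea using plain scaled dot-product attention over image-region features; the function name, scoring rule, and shapes are all assumptions for illustration, not the authors' architecture.

```python
import numpy as np

def visual_semantic_attention(words, regions):
    """Sketch: compute an attended visual feature for each word.

    words:   (T, d) word embeddings from the text description
    regions: (R, d) image-region features from the visual content
    Returns a (T, d) array where each row is a weighted sum of
    region features, weighted by relevance to that word.
    """
    d = words.shape[1]
    # Scaled dot-product relevance between every word and every region.
    scores = words @ regions.T / np.sqrt(d)          # (T, R)
    # Softmax over regions so each word's weights sum to 1.
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ regions                         # (T, d)

rng = np.random.default_rng(0)
attended = visual_semantic_attention(rng.normal(size=(5, 8)),
                                     rng.normal(size=(3, 8)))
print(attended.shape)  # (5, 8)
```

Each word thus receives its own view of the image, which is what lets a later fusion step combine per-word textual and visual sentiment cues.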