Authors
Soujanya Poria, Erik Cambria, Newton Howard, Guang-Bin Huang, Amir Hussain
Publication date
2016/1/22
Journal
Neurocomputing
Volume
174
Pages
50-59
Publisher
Elsevier
Description
A huge number of videos are posted every day on social media platforms such as Facebook and YouTube, making the Internet an unlimited source of information. In the coming decades, coping with such information and mining useful knowledge from it will be an increasingly difficult task. In this paper, we propose a novel methodology for multimodal sentiment analysis that harvests sentiments from Web videos using a model that draws on the audio, visual, and textual modalities as sources of information. We used both feature- and decision-level fusion methods to merge affective information extracted from multiple modalities. A thorough comparison with existing works in this area is carried out throughout the paper, demonstrating the novelty of our approach. Preliminary comparative experiments with the YouTube dataset show that the proposed multimodal system achieves an accuracy …
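The abstract contrasts feature-level and decision-level fusion. As an illustrative sketch (not the authors' implementation), feature-level fusion concatenates per-modality feature vectors before a single classifier, while decision-level fusion combines the class probabilities produced by separate per-modality classifiers; the function names and toy values below are assumptions for illustration only:

```python
import numpy as np

def feature_level_fusion(text_feat, audio_feat, visual_feat):
    """Feature-level fusion: concatenate modality feature vectors into
    one joint vector before any classifier sees them."""
    return np.concatenate([text_feat, audio_feat, visual_feat])

def decision_level_fusion(probs, weights=None):
    """Decision-level fusion: combine per-modality class probabilities
    (here by a weighted average) after separate classifiers have run."""
    probs = np.asarray(probs, dtype=float)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)  # equal weights
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a distribution

# Toy example: three modalities, binary sentiment (negative, positive)
text_p, audio_p, visual_p = [0.2, 0.8], [0.4, 0.6], [0.3, 0.7]
fused = decision_level_fusion([text_p, audio_p, visual_p])

# Toy feature vectors of different lengths, concatenated into one input
combined = feature_level_fusion(np.zeros(3), np.ones(2), np.full(4, 2.0))
```

Decision-level fusion keeps the modality classifiers independent (any one can be retrained or dropped), whereas feature-level fusion lets a single model learn cross-modal interactions at the cost of a larger input space.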
Total citations
Annual citation counts, 2015–2024 (per-year histogram not recoverable as text)