r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection

K Nakamura, S Levy, WY Wang - arXiv preprint arXiv:1911.03854, 2019 - arxiv.org
Fake news has altered society in negative ways in politics and culture. It has adversely affected both online social network systems and offline communities and conversations. Using automatic machine learning classification models is an efficient way to combat the widespread dissemination of fake news. However, a lack of effective, comprehensive datasets has been a problem for fake news research and detection model development. Prior fake news datasets do not provide multimodal text and image data, metadata, comment data, and fine-grained fake news categorization at the scale and breadth of our dataset. We present Fakeddit, a novel multimodal dataset consisting of over 1 million samples from multiple categories of fake news. After being processed through several stages of review, the samples are labeled according to 2-way, 3-way, and 6-way classification categories through distant supervision. We construct hybrid text+image models and perform extensive experiments for multiple variations of classification, demonstrating the importance of the novel aspect of multimodality and fine-grained classification unique to Fakeddit.
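The abstract mentions hybrid text+image models evaluated on 2-way, 3-way, and 6-way labels. As a rough illustration only, the sketch below shows a minimal late-fusion text+image classifier in PyTorch; it is not the paper's architecture, and the encoder choices, feature dimensions, vocabulary size, and 64x64 image size are all assumptions.

```python
# Minimal sketch of a late-fusion text+image classifier, in the spirit of the
# "hybrid text+image models" the abstract mentions. This is NOT the authors'
# architecture; encoders, dimensions, and the 64x64 image size are assumptions.
import torch
import torch.nn as nn


class HybridTextImageClassifier(nn.Module):
    def __init__(self, vocab_size: int, num_classes: int = 6,
                 embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        # Text branch: embedding bag with mean pooling over submission titles.
        self.text_encoder = nn.EmbeddingBag(vocab_size, embed_dim)
        # Image branch: a small CNN producing a 64-dim feature vector.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenate both modalities, then classify into
        # num_classes labels (2, 3, or 6 to match Fakeddit's granularities).
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim + 64, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, token_ids, offsets, images):
        text_feat = self.text_encoder(token_ids, offsets)   # (batch, embed_dim)
        image_feat = self.image_encoder(images)              # (batch, 64)
        fused = torch.cat([text_feat, image_feat], dim=1)
        return self.classifier(fused)                         # (batch, num_classes)


# Example forward pass with random inputs (batch of 3 samples).
model = HybridTextImageClassifier(vocab_size=30000, num_classes=6)
logits = model(
    torch.randint(0, 30000, (20,)),   # token ids of all 3 titles, flattened
    torch.tensor([0, 7, 13]),         # start offset of each title in the batch
    torch.rand(3, 3, 64, 64),         # 3 RGB images
)
```

Setting num_classes to 2, 3, or 6 would correspond to the dataset's three labeling granularities; stronger encoders (for example a pretrained CNN or a transformer text encoder) could replace the toy branches in this sketch.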