Authors
Soubhik Barari, Christopher Lucas, Kevin Munger
Publication date
2021/1/13
Journal
OSF Preprints
Volume
13
Abstract
We demonstrate that political misinformation in the form of videos synthesized by deep learning ("deepfakes") can convince the American public of scandals that never occurred at alarming rates (nearly 50% of a representative sample), but no more so than equivalent misinformation conveyed through existing news formats like textual headlines or audio recordings. Similarly, we confirm that motivated reasoning about the deepfake target's identity (e.g., partisanship or gender) plays a key role in facilitating persuasion, but, again, no more so than via existing news formats. In fact, when asked to discern real videos from deepfakes, partisan motivated reasoning explains a large gap in viewers' detection accuracy, but only for real videos, not deepfakes. Our findings come from a nationally representative sample of 5,750 subjects who participated in two news feed experiments with exposure to a novel collection of realistic deepfakes created in collaboration with industry partners. Finally, a series of randomized interventions reveals that brief but specific informational treatments about deepfakes only sometimes attenuate deepfakes' effects, and then only to a relatively small degree. Above all else, broad literacy in politics and digital technology most strongly increases discernment between deepfakes and authentic videos of political elites.
Total citations