ExClaim: Explainable neural claim verification using rationalization

S Gurrapu, L Huang, FA Batarseh - 2022 IEEE 29th Annual Software Technology Conference (STC), 2022 - ieeexplore.ieee.org
With the advent of deep learning, text generation language models have improved dramatically, producing text of a quality comparable to human-written text. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources, which are strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes, and their decision-making process and the steps they take to arrive at a final prediction are opaque to the user. We introduce a novel claim verification approach, namely ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answer problem and achieves an F1 score of 0.93. It also provides explanations for its subtasks to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems.
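The abstract frames verdict classification as a question-answer problem evaluated by F1 score, but does not spell out the implementation. The sketch below is a minimal illustration of how such a framing could look, using an off-the-shelf NLI model via Hugging Face's zero-shot classification pipeline; the model name, candidate labels, claim-evidence pair, and toy labels are all illustrative assumptions, not the paper's actual code or data.

```python
# Illustrative sketch only: one way to pose claim verification as a
# question-answer style classification over a claim-evidence pair.
# The model, labels, and examples are assumptions, not ExClaim's code.
from transformers import pipeline
from sklearn.metrics import f1_score

# An off-the-shelf NLI model stands in for the verdict classifier.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim = "Drinking green tea cures the common cold."
evidence = "Clinical trials found no evidence that green tea cures colds."

# Pose the verdict as a question over the claim and its evidence.
result = classifier(
    f"Does the evidence support the claim? Claim: {claim} Evidence: {evidence}",
    candidate_labels=["supported", "refuted", "not enough information"],
)
print(result["labels"][0], round(result["scores"][0], 3))

# The abstract reports a 0.93 F1 score; a macro-averaged F1 over toy
# predictions shows how such a figure would be computed.
y_true = ["supported", "refuted", "refuted", "supported"]
y_pred = ["supported", "refuted", "supported", "supported"]
print(f1_score(y_true, y_pred, average="macro"))
```

A rationalization system like the one the abstract describes would additionally generate a natural language explanation alongside the verdict; the sketch covers only the verdict-classification and evaluation steps.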