AI hallucinations: a misnomer worth clarifying

N Maleki, B Padmanabhan… - 2024 IEEE conference on …, 2024 - ieeexplore.ieee.org
As large language models continue to advance in Artificial Intelligence (AI), text generation
systems have been shown to suffer from a problematic phenomenon often termed "…

Rationalization for explainable NLP: a survey

S Gurrapu, A Kulkarni, L Huang… - Frontiers in Artificial …, 2023 - frontiersin.org
Recent advances in deep learning have improved the performance of many Natural
Language Processing (NLP) tasks such as translation, question-answering, and text …

End-to-end multimodal fact-checking and explanation generation: A challenging dataset and models

BM Yao, A Shah, L Sun, JH Cho, L Huang - Proceedings of the 46th …, 2023 - dl.acm.org
We propose end-to-end multimodal fact-checking and explanation generation, where the
input is a claim and a large collection of web sources, including articles, images, videos, and …

Generative large language models in automated fact-checking: A survey

I Vykopal, M Pikuliak, S Ostermann, M Šimko - arXiv preprint arXiv …, 2024 - arxiv.org
The dissemination of false information on online platforms presents a serious societal
challenge. While manual fact-checking remains crucial, Large Language Models (LLMs) …

From outputs to insights: a survey of rationalization approaches for explainable text classification

E Mendez Guzman, V Schlegel… - Frontiers in Artificial …, 2024 - frontiersin.org
Deep learning models have achieved state-of-the-art performance for text classification in
the last two decades. However, this has come at the expense of models becoming less …