Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2024 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Cognitive mirage: A review of hallucinations in large language models

H Ye, T Liu, A Zhang, W Hua, W Jia - arXiv preprint arXiv:2309.06794, 2023 - arxiv.org
As large language models continue to develop in the field of AI, text generation systems are
susceptible to a worrisome phenomenon known as hallucination. In this study, we …

Siren's song in the AI ocean: a survey on hallucination in large language models

Y Zhang, Y Li, L Cui, D Cai, L Liu, T Fu… - arXiv preprint arXiv …, 2023 - arxiv.org
While large language models (LLMs) have demonstrated remarkable capabilities across a
range of downstream tasks, a significant concern revolves around their propensity to exhibit …

Chain-of-verification reduces hallucination in large language models

S Dhuliawala, M Komeili, J Xu, R Raileanu, X Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Generation of plausible yet incorrect factual information, termed hallucination, is an
unsolved issue in large language models. We study the ability of language models to …
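
A minimal sketch of the chain-of-verification recipe this entry names, assuming a hypothetical llm(prompt) completion helper (not a real API) and illustrative prompts; the four-step structure (draft, plan verification questions, answer them independently, revise) follows the paper's high-level description, but the prompts and helper below are assumptions rather than the authors' implementation.

```python
# Hedged sketch of a chain-of-verification loop; `llm` is a hypothetical
# text-completion function, not a real API.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat/completion backend here")

def chain_of_verification(question: str) -> str:
    # 1. Draft an initial answer.
    draft = llm(f"Answer the question.\nQ: {question}\nA:")

    # 2. Plan verification questions that probe factual claims in the draft.
    plan = llm(
        "List short fact-checking questions, one per line, that would verify "
        f"the claims in this answer:\n{draft}"
    )
    verification_questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # 3. Answer each verification question independently of the draft,
    #    so the check is not biased by the original phrasing.
    checks = [(q, llm(f"Q: {q}\nA:")) for q in verification_questions]

    # 4. Revise the draft in light of the verification answers.
    evidence = "\n".join(f"{q} -> {a}" for q, a in checks)
    return llm(
        f"Original question: {question}\n"
        f"Draft answer: {draft}\n"
        f"Verification Q&A:\n{evidence}\n"
        "Write a corrected final answer that only keeps claims supported above."
    )
```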

Large legal fictions: Profiling legal hallucinations in large language models

M Dahl, V Magesh, M Suzgun… - Journal of Legal Analysis, 2024 - academic.oup.com
Do large language models (LLMs) know the law? LLMs are increasingly being used to
augment legal practice, education, and research, yet their revolutionary potential is …

Fine-tuning language models for factuality

K Tian, E Mitchell, H Yao, CD Manning… - arXiv preprint arXiv …, 2023 - arxiv.org
The fluency and creativity of large pre-trained language models (LLMs) have led to their
widespread use, sometimes even as a replacement for traditional search engines. Yet …
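
This entry describes fine-tuning models toward factuality with preference data. As a hedged illustration, not the paper's pipeline, the sketch below shows how automated factuality estimates could be turned into preference pairs for preference-based fine-tuning (e.g. DPO); factuality_score is a placeholder, not the estimator used in the paper.

```python
# Hedged sketch: turn per-sample factuality scores into preference pairs
# suitable for preference-based fine-tuning (e.g. DPO). The scoring function
# is a stand-in; the paper's own estimators are not reproduced here.
from itertools import combinations

def factuality_score(response: str) -> float:
    """Placeholder for an automated factuality estimate
    (e.g. a fact checker or model confidence), returning a value in [0, 1]."""
    raise NotImplementedError

def build_preference_pairs(prompt: str, responses: list[str]) -> list[dict]:
    """Pair sampled responses so the more factual one is marked 'chosen'."""
    scored = [(r, factuality_score(r)) for r in responses]
    pairs = []
    for (r1, s1), (r2, s2) in combinations(scored, 2):
        if s1 == s2:
            continue  # no preference signal if the scores tie
        chosen, rejected = (r1, r2) if s1 > s2 else (r2, r1)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```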

Knowledge conflicts for LLMs: A survey

R Xu, Z Qi, Z Guo, C Wang, H Wang, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
This survey provides an in-depth analysis of knowledge conflicts for large language models
(LLMs), highlighting the complex challenges they encounter when blending contextual and …

RAGTruth: A hallucination corpus for developing trustworthy retrieval-augmented language models

C Niu, Y Wu, J Zhu, S Xu, K Shum, R Zhong… - arXiv preprint arXiv …, 2023 - arxiv.org
Retrieval-augmented generation (RAG) has become a primary technique for alleviating
hallucinations in large language models (LLMs). Despite the integration of RAG, LLMs may …
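
For context on the RAG setting this corpus targets, here is a minimal sketch of the retrieve-then-generate loop; retrieve and llm are hypothetical stand-ins for any retriever and model, not anything defined by the paper.

```python
# Hedged sketch of a retrieval-augmented generation loop; `retrieve` and `llm`
# are hypothetical placeholders for a retriever and a language model.
def retrieve(query: str, k: int = 3) -> list[str]:
    raise NotImplementedError("e.g. BM25 or a dense retriever over a corpus")

def llm(prompt: str) -> str:
    raise NotImplementedError

def rag_answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    # Grounding instruction: answer only from the retrieved passages, which is
    # the setting in which RAGTruth annotates hallucinations.
    prompt = (
        f"Use only the passages below to answer.\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)
```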

Calibrated language models must hallucinate

AT Kalai, SS Vempala - Proceedings of the 56th Annual ACM …, 2024 - dl.acm.org
Recent language models generate false but plausible-sounding text with surprising
frequency. Such “hallucinations” are an obstacle to the usability of language-based AI …
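
Stated loosely (a paraphrase that omits the paper's exact constants and lower-order terms, not the theorem as published), the statistical argument this entry refers to says that a model calibrated on facts must hallucinate at a rate at least on the order of the "monofact" rate, the fraction of facts appearing exactly once in the training data, which in Good-Turing fashion estimates the mass of facts the model has effectively never seen:

\[
\Pr[\text{hallucination}] \;\gtrsim\; \widehat{\mathrm{MF}} \;-\; \varepsilon_{\text{cal}},
\]

where $\widehat{\mathrm{MF}}$ denotes the monofact rate and $\varepsilon_{\text{cal}}$ the model's miscalibration.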

ExpertQA: Expert-curated questions and attributed answers

C Malaviya, S Lee, S Chen, E Sieber, M Yatskar… - arXiv preprint arXiv …, 2023 - arxiv.org
As language models are adopted by a more sophisticated and diverse set of users, the
importance of guaranteeing that they provide factually correct information supported by …