X Zhao, J Yu, Z Liu, J Wang, D Li, Y Chen, B Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Hallucinations are prevalent in Large Language Models (LLMs): the
generated content is coherent but factually incorrect, which inflicts a heavy blow on the …