Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling

X Fang, Z Huang, Z Tian, M Fang, Z Pan… - arXiv preprint arXiv …, 2024 - arxiv.org
LLMs obtain remarkable performance but suffer from hallucinations. Most research on
detecting hallucination focuses on questions with short and concrete correct answers …

DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation

M Yang, Y Chen, Y Liu, L Shi - Proceedings of the 33rd ACM SIGSOFT …, 2024 - dl.acm.org
Large Language Models (LLMs) have showcased their remarkable capabilities in diverse
domains, encompassing natural language understanding, translation, and even code …

DefAn: Definitive Answer Dataset for LLMs Hallucination Evaluation

ABM Rahman, S Anwar, M Usman, A Mian - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated remarkable capabilities, revolutionizing
the integration of AI in daily life applications. However, they are prone to hallucinations …