This survey summarises the most recent methods for building and assessing helpful, honest, and harmless neural language models, considering small-, medium-, and large-sized models …
Y Zhang, L Cui, W Bi, S Shi - arXiv preprint arXiv:2312.15710, 2023 - arxiv.org
Despite their impressive capabilities, large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information, a phenomenon …
X Yu, Y Wang, Y Chen, Z Tao, D Xi, S Song… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, generative artificial intelligence models, represented by Large Language Models (LLMs) and Diffusion Models (DMs), have revolutionized content production …
Large language models (LLMs) often require external task-relevant knowledge, supplied through prompts, to augment their internal knowledge. However, simply injecting external knowledge into …
P Sahoo, P Meharia, A Ghosh, S Saha… - Findings of the …, 2024 - aclanthology.org
The rapid advancement of foundation models (FMs) across language, image, audio, and video domains has shown remarkable capabilities in diverse tasks. However, the …
DM Park, HJ Lee - Informatization Policy, 2024 - koreascience.kr
Hallucination is a significant barrier to the utilization of large-scale language models or multimodal models. In this study, we collected 654 computer science papers with …
Federated knowledge graph reasoning (FedKGR) aims to perform reasoning over different clients while protecting data privacy, drawing increasing attention to its high practical value …
Y Yang, J Chen, Y Xiang - World Wide Web, 2025 - Springer
Knowledge graphs manage and organize data and information in a structured form, which can provide effective support for various applications and services. Only reliable …
Hallucinations pose a significant challenge to the reliability and alignment of Large Language Models (LLMs), limiting their widespread acceptance beyond chatbot …