Factuality challenges in the era of large language models and opportunities for fact-checking

I Augenstein, T Baldwin, M Cha… - Nature Machine …, 2024 - nature.com
The emergence of tools based on large language models (LLMs), such as OpenAI's
ChatGPT and Google's Gemini, has garnered immense public attention owing to their …

Multi-hop question answering

V Mavi, A Jangra, A Jatowt - Foundations and Trends® in …, 2024 - nowpublishers.com
The task of Question Answering (QA) has attracted significant research interest for a
long time. Its relevance to language understanding and knowledge retrieval tasks, along …

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

S Feng, W Shi, Y Wang, W Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps-
-missing or outdated information in LLMs--might always persist given the evolving nature of …
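
The snippet points to probing for knowledge gaps through multi-LLM collaboration. Below is a minimal, hypothetical sketch of one such collaboration pattern: ask several independent models the same question and abstain when they fail to converge. The `query_model` helper, the model list, and the agreement threshold are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter

def query_model(model, question):
    """Placeholder for a call to one LLM's API; returns a short answer string.

    In practice this would wrap an actual chat/completions client.
    """
    raise NotImplementedError

def answer_or_abstain(models, question, min_agreement=0.75):
    """Ask several independent LLMs the same question and abstain when they
    disagree, treating disagreement as a signal of a knowledge gap."""
    answers = [query_model(m, question).strip().lower() for m in models]
    top_answer, votes = Counter(answers).most_common(1)[0]
    if votes / len(answers) >= min_agreement:
        return top_answer
    return "ABSTAIN"  # models disagree; likely missing or outdated knowledge
```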

Modular pluralism: Pluralistic alignment via multi-LLM collaboration

S Feng, T Sorensen, Y Liu, J Fisher, CY Park… - arXiv preprint arXiv …, 2024 - arxiv.org
While existing alignment paradigms have been integral in developing large language
models (LLMs), LLMs often learn an averaged human preference and struggle to model …

Usable XAI: 10 strategies towards exploiting explainability in the LLM era

X Wu, H Zhao, Y Zhu, Y Shi, F Yang, T Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Explainable AI (XAI) refers to techniques that provide human-understandable insights into
the workings of AI models. Recently, the focus of XAI is being extended towards Large …

DELL: Generating reactions and explanations for LLM-based misinformation detection

H Wan, S Feng, Z Tan, H Wang, Y Tsvetkov… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models face factuality and hallucination challenges that limit their direct
off-the-shelf use for judging the veracity of news articles, where factual …

Mitigating hallucination in fictional character role-play

N Sadeq, Z Xie, B Kang, P Lamba, X Gao… - arXiv preprint arXiv …, 2024 - arxiv.org
Role-playing has wide-ranging applications in customer support, embodied agents,
computational social science, etc. The influence of parametric world knowledge of large …

Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting

Z Wang, Z Wang, L Le, HS Zheng, S Mishra… - arXiv preprint arXiv …, 2024 - arxiv.org
Retrieval augmented generation (RAG) combines the generative abilities of large language
models (LLMs) with external knowledge sources to provide more accurate and up-to-date …
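
The snippet describes RAG as pairing LLM generation with retrieved external knowledge, and the title adds a drafting step. A rough, hypothetical retrieve-draft-verify sketch is given below; the `retriever`, `draft_llm`, and `verify_llm` callables and the scoring prompt are illustrative assumptions rather than the paper's actual algorithm.

```python
def speculative_rag_answer(question, retriever, draft_llm, verify_llm, k=8, n_drafts=4):
    """Retrieve supporting passages, let a smaller model draft several candidate
    answers from different evidence subsets, then let a larger model pick one.

    Assumed callables (not from the paper):
      retriever(question, k) -> list of text passages
      draft_llm(prompt)      -> candidate answer string
      verify_llm(prompt)     -> numeric score (higher = better supported)
    """
    passages = retriever(question, k)
    # Split the evidence so each draft conditions on a different subset of passages.
    subsets = [passages[i::n_drafts] for i in range(n_drafts)]

    drafts = []
    for subset in subsets:
        context = "\n\n".join(subset)
        drafts.append(draft_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))

    # The verifier scores each draft against its own evidence; keep the best one.
    def score(draft, subset):
        context = "\n\n".join(subset)
        return verify_llm(
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer: {draft}\n"
            "Rate how well the answer is supported by the context (0-10):"
        )

    best_draft, _ = max(zip(drafts, subsets), key=lambda ds: score(*ds))
    return best_draft
```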

Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely

S Zhao, Y Yang, Z Wang, Z He, LK Qiu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) augmented with external data have demonstrated
remarkable capabilities in completing real-world tasks. Techniques for integrating external …

Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts

J Kang, L Karlinsky, H Luo, Z Wang, J Hansen… - arXiv preprint arXiv …, 2024 - arxiv.org
We present Self-MoE, an approach that transforms a monolithic LLM into a compositional,
modular system of self-specialized experts, named MiXSE (MiXture of Self-specialized …
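
The snippet describes turning a monolithic LLM into a compositional system of self-specialized experts. A toy sketch of the general router-plus-experts pattern is shown below; the router, the expert callables, and the fallback behaviour are illustrative assumptions, not the MiXSE implementation itself.

```python
class CompositionalLLM:
    """Toy router-plus-experts wrapper illustrating a modular system of
    specialized experts layered on top of one base model."""

    def __init__(self, base_llm, experts, router):
        # experts: dict mapping a specialty name (e.g. "math", "code") to a
        #          callable that answers prompts in that specialty
        # router:  callable mapping a prompt to one of the specialty names
        self.base_llm = base_llm
        self.experts = experts
        self.router = router

    def generate(self, prompt):
        specialty = self.router(prompt)
        expert = self.experts.get(specialty)
        if expert is None:
            # Fall back to the monolithic base model when no expert matches.
            return self.base_llm(prompt)
        return expert(prompt)
```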