Retrieval-augmented generation for large language models: A survey

Y Gao, Y Xiong, X Gao, K Jia, J Pan, Y Bi, Y Dai… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate powerful capabilities, but they still face
challenges in practical applications, such as hallucinations, slow knowledge updates, and …

Don't Hallucinate, Abstain: Identifying LLM Knowledge Gaps via Multi-LLM Collaboration

S Feng, W Shi, Y Wang, W Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite efforts to expand the knowledge of large language models (LLMs), knowledge gaps (missing or outdated information in LLMs) might always persist given the evolving nature of …

DELL: Generating reactions and explanations for LLM-based misinformation detection

H Wan, S Feng, Z Tan, H Wang, Y Tsvetkov… - arXiv preprint arXiv …, 2024 - arxiv.org
Challenges in factuality and hallucination prevent large language models from being directly
employed off-the-shelf for judging the veracity of news articles, where factual …

Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey)

K Kenthapadi, M Sameki, A Taly - arXiv preprint arXiv:2407.12858, 2024 - arxiv.org
With the ongoing rapid adoption of Artificial Intelligence (AI)-based systems in high-stakes
domains, ensuring the trustworthiness, safety, and observability of these systems has …

Language Modeling with Editable External Knowledge

BZ Li, E Liu, A Ross, A Zeitoun, G Neubig… - arXiv preprint arXiv …, 2024 - arxiv.org
When the world changes, so does the text that humans write about it. How do we build
language models that can be easily updated to reflect these changes? One popular …

Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting

Z Wang, Z Wang, L Le, HS Zheng, S Mishra… - arXiv preprint arXiv …, 2024 - arxiv.org
Retrieval augmented generation (RAG) combines the generative abilities of large language
models (LLMs) with external knowledge sources to provide more accurate and up-to-date …
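The retrieve-then-generate pattern this abstract describes can be sketched minimally as follows. The keyword-overlap retriever, the toy corpus, and the prompt template are illustrative stand-ins, not the drafting method proposed in this paper or the pipeline of any other work listed here.

```python
# Minimal retrieve-then-generate (RAG) sketch. The corpus, the
# keyword-overlap scorer, and the prompt template are hypothetical
# stand-ins for a real retriever and generator.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the generator by prepending retrieved evidence to the query."""
    context = "\n".join(f"- {d}" for d in docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

corpus = [
    "RAG combines retrieval with generation.",
    "LLMs can hallucinate facts.",
    "Paris is the capital of France.",
]
prompt = build_prompt(
    "What is the capital of France?",
    retrieve("capital of France", corpus),
)
```

In a real system the retriever would use dense or sparse vector search and the prompt would be passed to an LLM; the sketch only shows where external knowledge enters the generation step.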

Synchronous Faithfulness Monitoring for Trustworthy Retrieval-Augmented Generation

D Wu, JC Gu, F Yin, N Peng, KW Chang - arXiv preprint arXiv:2406.13692, 2024 - arxiv.org
Retrieval-augmented language models (RALMs) have shown strong performance and wide
applicability in knowledge-intensive tasks. However, there are significant trustworthiness …
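The kind of trustworthiness check the abstract alludes to can be illustrated with a toy decoding-time monitor: flag a generated sentence when too few of its content words appear in the retrieved evidence. The stopword list, the substring test, and the threshold are hypothetical simplifications, not the paper's actual monitoring signals.

```python
# Toy faithfulness monitor: a generated sentence is "supported" when at
# least `threshold` of its content words occur in the retrieved evidence.
# All specifics here (stopwords, threshold, substring matching) are
# illustrative assumptions.

STOPWORDS = {"the", "a", "an", "is", "of", "in", "to", "and"}

def supported(sentence: str, evidence: str, threshold: float = 0.7) -> bool:
    """Return True if enough content words of `sentence` appear in `evidence`."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    content = [w for w in words if w not in STOPWORDS]
    if not content:
        return True  # nothing to check
    hits = sum(w in evidence.lower() for w in content)
    return hits / len(content) >= threshold

evidence = "The Eiffel Tower was completed in 1889 in Paris."
faithful = supported("The Eiffel Tower was completed in 1889.", evidence)
unfaithful = supported("The tower was demolished in 1950.", evidence)
```

A production monitor would use entailment models or decoding-time signals rather than word overlap; the sketch only conveys the shape of a per-sentence faithfulness check.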

MixGR: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity

F Cai, X Zhao, T Chen, S Chen, H Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent studies show the growing significance of document retrieval in LLM generation, i.e.,
RAG, within the scientific domain by bridging their knowledge gap. However …

Thread: A Logic-Based Data Organization Paradigm for How-To Question Answering with Retrieval Augmented Generation

K An, F Yang, L Li, J Lu, S Cheng, L Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Current question answering systems leveraging retrieval augmented generation perform
well in answering factoid questions but face challenges with non-factoid questions …

AGRaME: Any-Granularity Ranking with Multi-Vector Embeddings

RG Reddy, O Attia, Y Li, H Ji, S Potdar - arXiv preprint arXiv:2405.15028, 2024 - arxiv.org
Ranking is a fundamental and popular problem in search. However, existing ranking
algorithms usually restrict the granularity of ranking to full passages or require a specific …
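Multi-vector ranking of the kind this abstract describes is commonly instantiated as ColBERT-style late interaction: each query token vector is matched against its best-scoring token vector in the passage, and those maxima are summed. Whether AGRaME uses exactly this scoring rule is an assumption here, and the 2-d vectors below are toy stand-ins for learned embeddings.

```python
# Late-interaction ("MaxSim") scoring sketch over multi-vector
# embeddings. The 2-d token vectors are illustrative, not real
# embeddings from any model.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_vecs, passage_vecs):
    """Sum, over query tokens, of the best dot-product match in the passage."""
    return sum(max(dot(q, p) for p in passage_vecs) for q in query_vecs)

query = [(1.0, 0.0), (0.0, 1.0)]        # two query-token vectors
passage_a = [(0.9, 0.1), (0.2, 0.8)]    # aligns with both query tokens
passage_b = [(0.1, 0.1), (0.0, 0.2)]    # weak match on both

score_a = maxsim_score(query, passage_a)
score_b = maxsim_score(query, passage_b)
```

Because the score is a sum over per-token maxima, restricting `passage_vecs` to the vectors of a sentence or proposition yields a score at that finer granularity, which is one way ranking below the full-passage level becomes possible.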