Improving LLM Generations via Fine-Grained Self-Endorsement

A Wang, L Song, B Peng, L Jin, Y Tian… - Findings of the …, 2024 - aclanthology.org
This work studies mitigating fact-conflicting hallucinations for large language models (LLMs) at
inference time. Particularly, we propose a self-endorsement framework that leverages the …

REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy

HS Chang, N Peng, M Bansal, A Ramakrishna… - arXiv preprint arXiv …, 2024 - arxiv.org
Decoding methods for large language models (LLMs) usually struggle with the tradeoff
between ensuring factuality and maintaining diversity. For example, a higher p threshold in …
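The p-threshold tradeoff the snippet alludes to can be illustrated with a minimal top-p (nucleus) filtering sketch; the token scores below are invented for illustration, and real decoders operate on full model vocabularies rather than toy dictionaries:

```python
import math

def top_p_filter(logits, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p.

    A higher p admits more low-probability tokens (more diversity, more risk
    of unsupported continuations); a lower p truncates harder (safer but
    less varied). `logits` maps token -> raw score (hypothetical values).
    """
    # Softmax over the raw scores.
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # Sort tokens by probability (descending) and keep until mass >= p.
    kept, mass = {}, 0.0
    for tok, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[tok] = pr
        mass += pr
        if mass >= p:
            break
    # Renormalize over the surviving tokens.
    return {t: pr / mass for t, pr in kept.items()}

# Example: a confident head token plus two tails.
logits = {"Paris": 5.0, "Lyon": 2.0, "banana": 0.1}
print(top_p_filter(logits, 0.9))   # tight threshold: only the head survives
print(top_p_filter(logits, 0.99))  # looser threshold: a tail token re-enters
```

With p = 0.9 only the dominant token remains; raising p to 0.99 re-admits a plausible alternative, which is exactly the factuality/diversity dial such decoding methods try to tune automatically.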

Make Every Token Count: A Systematic Survey on Decoding Methods for Foundation Models

H Wang, K Shu - researchgate.net
Foundation models, such as large language models (LLMs) and large vision-language
models (LVLMs), have gained significant attention for their remarkable performance across …