Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - arXiv preprint arXiv:2311.05656, 2023 - arxiv.org
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of Large Language Models (LLMs) has great potential to …

Large language models for information retrieval: A survey

Y Zhu, H Yuan, S Wang, J Liu, W Liu, C Deng… - arXiv preprint arXiv …, 2023 - arxiv.org
As a primary means of information acquisition, information retrieval (IR) systems, such as
search engines, have integrated themselves into our daily lives. These systems also serve …

Improving factual consistency for knowledge-grounded dialogue systems via knowledge enhancement and alignment

B Xue, W Wang, H Wang, F Mi, R Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Knowledge-grounded dialogue systems based on pretrained language models (PLMs) are
prone to generating responses that are factually inconsistent with the provided knowledge …

Rethinking Conversational Agents in the Era of LLMs: Proactivity, Non-collaborativity, and Beyond

Y Deng, W Lei, M Huang, TS Chua - … in Information Retrieval in the Asia …, 2023 - dl.acm.org
Conversational systems are designed to offer human users social support or functional
services through natural language interactions. Typical conversation research mainly …

Large Language Model Powered Agents for Information Retrieval

A Zhang, Y Deng, Y Lin, X Chen, JR Wen… - Proceedings of the 47th …, 2024 - dl.acm.org
The vital goal of information retrieval today extends beyond merely connecting users with
relevant information they search for. It also aims to enrich the diversity, personalization, and …

Aligning Uncertainty: Leveraging LLMs to Analyze Uncertainty Transfer in Text Summarization

Z Kolagar, A Zarcone - Proceedings of the 1st Workshop on …, 2024 - aclanthology.org
Automatically generated summaries can be evaluated along different dimensions, one being
how faithfully the uncertainty from the source text is conveyed in the summary. We present a …

HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild

Z Zhu, Z Sun, Y Yang - arXiv preprint arXiv:2403.04307, 2024 - arxiv.org
Hallucinations pose a significant challenge to the reliability of large language models
(LLMs) in critical domains. Recent benchmarks designed to assess LLM hallucinations …

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models

Y Lyu, L Yan, S Wang, H Shi, D Yin, P Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Despite their success at many natural language processing (NLP) tasks, large language
models (LLMs) still struggle to effectively leverage knowledge for knowledge-intensive tasks …

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models

J Oh, S Kim, J Seo, J Wang, R Xu, X Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have achieved unprecedented performance in various
applications, yet their evaluation remains a critical issue. Existing hallucination benchmarks …

Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models

X Zhang, Y Chen, S Hu, X Han, Z Xu, Y Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
As large language models (LLMs) increasingly permeate daily life, there is a growing
demand for real-time interactions that mirror human conversations. Traditional turn-based …