Can Editing LLMs Inject Harm?

C Chen, B Huang, Z Li, Z Chen, S Lai, X Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing has been increasingly adopted to correct false or outdated
knowledge in Large Language Models (LLMs). Meanwhile, one critical but under-explored …

Can Knowledge Editing Really Correct Hallucinations?

B Huang, C Chen, X Xu, A Payani, K Shu - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) suffer from hallucinations, i.e., non-factual
information in generated content, despite their superior capacities across tasks. Meanwhile …

Enhancing LLM Capabilities Beyond Scaling Up

W Yin, M Chen, R Zhang, B Zhou… - Proceedings of the …, 2024 - aclanthology.org
General-purpose large language models (LLMs) are progressively expanding both in scale
and in access to non-public training data. This has led to notable progress in a variety of AI …

Can We Trust the Performance Evaluation of Uncertainty Estimation Methods in Text Summarization?

J He, R Yang, L Yu, C Li, R Jia, F Chen, M Jin… - arXiv preprint arXiv …, 2024 - arxiv.org
Text summarization, a key natural language generation (NLG) task, is vital in various
domains. However, the high cost of inaccurate summaries in risk-critical applications …