Survey of hallucination in natural language generation

Z Ji, N Lee, R Frieske, T Yu, D Su, Y Xu, E Ishii… - ACM Computing …, 2023 - dl.acm.org
Natural Language Generation (NLG) has improved exponentially in recent years thanks to
the development of sequence-to-sequence deep learning technologies such as Transformer …

Large language models for robotics: A survey

F Zeng, W Gan, Y Wang, N Liu, PS Yu - arXiv preprint arXiv:2311.07226, 2023 - arxiv.org
The human ability to learn, generalize, and control complex manipulation tasks through multi-
modality feedback suggests a unique capability, which we refer to as dexterity intelligence …

Large language models: A survey

S Minaee, T Mikolov, N Nikzad, M Chenaghlu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have drawn a lot of attention due to their strong
performance on a wide range of natural language tasks, since the release of ChatGPT in …

PLACES: Prompting language models for social conversation synthesis

M Chen, A Papangelis, C Tao, S Kim… - arXiv preprint arXiv …, 2023 - arxiv.org
Collecting high-quality conversational data can be very expensive for most applications and
infeasible for others due to privacy, ethical, or similar concerns. A promising direction to …

You impress me: Dialogue generation via mutual persona perception

Q Liu, Y Chen, B Chen, JG Lou, Z Chen, B Zhou… - arXiv preprint arXiv …, 2020 - arxiv.org
Despite the continuing efforts to improve the engagingness and consistency of chit-chat
dialogue systems, the majority of current work simply focuses on mimicking human-like …

Faithfulness in natural language generation: A systematic survey of analysis, evaluation and optimization methods

W Li, W Wu, M Chen, J Liu, X Xiao, H Wu - arXiv preprint arXiv:2203.05227, 2022 - arxiv.org
Natural Language Generation (NLG) has made great progress in recent years due to the
development of deep learning techniques such as pre-trained language models. This …

Enhancing self-consistency and performance of pre-trained language models through natural language inference

E Mitchell, JJ Noh, S Li, WS Armstrong… - arXiv preprint arXiv …, 2022 - arxiv.org
While large pre-trained language models are powerful, their predictions often lack logical
consistency across test inputs. For example, a state-of-the-art Macaw question-answering …

Like hiking? you probably enjoy nature: Persona-grounded dialog with commonsense expansions

BP Majumder, H Jhamtani, T Berg-Kirkpatrick… - arXiv preprint arXiv …, 2020 - arxiv.org
Existing persona-grounded dialog models often fail to capture simple implications of given
persona descriptions, something which humans are able to do seamlessly. For example …

One chatbot per person: Creating personalized chatbots based on implicit user profiles

Z Ma, Z Dou, Y Zhu, H Zhong, JR Wen - Proceedings of the 44th …, 2021 - dl.acm.org
Personalized chatbots focus on endowing chatbots with a consistent personality to behave
like real users, give more informative responses, and further act as personal assistants …

Generate, delete and rewrite: A three-stage framework for improving persona consistency of dialogue generation

H Song, Y Wang, WN Zhang, X Liu, T Liu - arXiv preprint arXiv:2004.07672, 2020 - arxiv.org
Maintaining a consistent personality in conversations is quite natural for human beings but
is still a non-trivial task for machines. The persona-based dialogue generation task is thus …