Abstractive summarization systems today produce fluent and relevant output, but often "hallucinate" statements not supported by the source text. We analyze the connection …
State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, mainly due to noise in the training dataset. Existing …
Pre-trained language models (e.g., BART) have shown impressive results when fine-tuned on large summarization datasets. However, little is understood about this fine-tuning process …
Y Mao, X Ren, H Ji, J Han - arXiv preprint arXiv:2010.12723, 2020 - arxiv.org
Despite significant progress, state-of-the-art abstractive summarization methods are still prone to hallucinating content inconsistent with the source document. In this paper, we …
Despite significant progress in understanding and improving faithfulness in abstractive summarization, the question of how decoding strategies affect faithfulness is less studied …
S Sun, W Li - arXiv preprint arXiv:2108.11846, 2021 - arxiv.org
Encoder-decoder models have achieved remarkable success in abstractive text summarization, which aims to compress one or more documents into a shorter version …
An advantage of seq2seq abstractive summarization models is that they generate text in a free-form manner, but this flexibility makes it difficult to interpret model behavior. In this work …
Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based …
Y Liu, Q Jia, K Zhu - Proceedings of the 60th Annual Meeting of …, 2022 - aclanthology.org
Previous length-controllable summarization models mostly control lengths at the decoding stage, whereas the encoding or the selection of information from the source document is not …