Automatic text summarization: A comprehensive survey

WS El-Kassas, CR Salama, AA Rafea… - Expert systems with …, 2021 - Elsevier
Automatic Text Summarization (ATS) is becoming increasingly important because of
the enormous amount of textual content that grows exponentially on the Internet and the various …

Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text

S Gehrmann, E Clark, T Sellam - Journal of Artificial Intelligence Research, 2023 - jair.org
Evaluation practices in natural language generation (NLG) have many known flaws,
but improved evaluation approaches are rarely widely adopted. This issue has become …

BARTScore: Evaluating generated text as text generation

W Yuan, G Neubig, P Liu - Advances in Neural Information …, 2021 - proceedings.neurips.cc
A wide variety of NLP applications, such as machine translation, summarization, and dialog,
involve text generation. One major challenge for these applications is how to evaluate …

Graph neural networks for natural language processing: A survey

L Wu, Y Chen, K Shen, X Guo, H Gao… - … and Trends® in …, 2023 - nowpublishers.com
Deep learning has become the dominant approach in addressing various tasks in Natural
Language Processing (NLP). Although text inputs are typically represented as a sequence …

Big Bird: Transformers for longer sequences

M Zaheer, G Guruganesh, KA Dubey… - Advances in neural …, 2020 - proceedings.neurips.cc
Transformer-based models such as BERT are among the most successful deep
learning models for NLP. Unfortunately, one of their core limitations is the quadratic …

SummEval: Re-evaluating summarization evaluation

AR Fabbri, W Kryściński, B McCann, C Xiong… - Transactions of the …, 2021 - direct.mit.edu
The scarcity of comprehensive up-to-date studies on evaluation metrics for text
summarization and the lack of consensus regarding evaluation protocols continue to inhibit …

On faithfulness and factuality in abstractive summarization

J Maynez, S Narayan, B Bohnet… - arXiv preprint arXiv …, 2020 - arxiv.org
It is well known that the standard likelihood training and approximate decoding objectives in
neural text generation models lead to less human-like responses for open-ended tasks such …

MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers

W Wang, F Wei, L Dong, H Bao… - Advances in Neural …, 2020 - proceedings.neurips.cc
Pre-trained language models (e.g., BERT (Devlin et al., 2018) and its variants) have achieved
remarkable success in a variety of NLP tasks. However, these models usually consist of …

Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics

A Pagnoni, V Balachandran, Y Tsvetkov - arXiv preprint arXiv:2104.13346, 2021 - arxiv.org
Modern summarization models generate highly fluent but often factually unreliable outputs.
This motivated a surge of metrics attempting to measure the factuality of automatically …

Text summarization with pretrained encoders

Y Liu, M Lapata - arXiv preprint arXiv:1908.08345, 2019 - arxiv.org
Bidirectional Encoder Representations from Transformers (BERT) represents the latest
incarnation of pretrained language models, which have recently advanced a wide range of …