Pre-trained language models for text generation: A survey

J Li, T Tang, WX Zhao, JY Nie, JR Wen - ACM Computing Surveys, 2024 - dl.acm.org
Text generation aims to produce plausible and readable text in human language from input
data. The resurgence of deep learning has greatly advanced this field, in particular, with the …

Internet-augmented language models through few-shot prompting for open-domain question answering

A Lazaridou, E Gribovskaya, W Stokowiec… - arXiv preprint arXiv …, 2022 - arxiv.org
In this work, we aim to capitalize on the unique few-shot capabilities of large-scale language
models (LSLMs) to overcome some of their challenges with respect to grounding to factual …
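
A minimal sketch of the evidence-conditioned few-shot prompting this entry describes: retrieved web snippets are interleaved with question-answer demonstrations before the test question. The example fields, the retrieve() stub, and the prompt layout are illustrative assumptions, not the paper's exact format.

```python
# Sketch of evidence-conditioned few-shot prompting in the spirit of
# Lazaridou et al. (2022). The demonstration and retrieve() stub are
# illustrative assumptions standing in for real web-search results.

FEW_SHOT_EXAMPLES = [
    {
        "evidence": "Mount Everest is 8,849 m tall.",
        "question": "How tall is Mount Everest?",
        "answer": "8,849 metres",
    },
]

def retrieve(question: str) -> str:
    """Stand-in for a web-search retriever returning a supporting snippet."""
    return "The Eiffel Tower was completed in 1889."

def build_prompt(question: str) -> str:
    parts = []
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Evidence: {ex['evidence']}\n"
                     f"Question: {ex['question']}\n"
                     f"Answer: {ex['answer']}\n")
    # The test question is grounded in freshly retrieved evidence.
    parts.append(f"Evidence: {retrieve(question)}\n"
                 f"Question: {question}\n"
                 f"Answer:")
    return "\n".join(parts)

print(build_prompt("When was the Eiffel Tower completed?"))
```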

Autoregressive entity retrieval

N De Cao, G Izacard, S Riedel, F Petroni - arXiv preprint arXiv:2010.00904, 2020 - arxiv.org
Entities are at the center of how we represent and aggregate knowledge. For instance,
encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article) …
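
A toy sketch of the core idea in this entry (the GENRE model): entities are retrieved by generating their names token by token, with decoding constrained by a prefix trie so the output is always a valid entity name. The word-level tokenization and the stand-in scorer are assumptions for illustration; the paper uses a real autoregressive language model.

```python
# Toy sketch of trie-constrained decoding as in De Cao et al.'s GENRE:
# at each step, only tokens that extend some valid entity name are allowed.

ENTITIES = ["New York City", "New York Times", "Newark"]

def build_trie(names):
    trie = {}
    for name in names:
        node = trie
        for tok in name.split():          # word-level tokens for simplicity
            node = node.setdefault(tok, {})
        node["<eos>"] = {}
    return trie

def score(prefix, token):
    """Stand-in LM score; prefers shorter tokens (illustrative only)."""
    return -len(token)

def constrained_decode(trie):
    node, out = trie, []
    while True:
        allowed = list(node.keys())       # mask: only trie continuations
        best = max(allowed, key=lambda t: score(out, t))
        if best == "<eos>":
            return " ".join(out)
        out.append(best)
        node = node[best]

print(constrained_decode(build_trie(ENTITIES)))  # always a valid entity name
```

Because every step is masked to trie continuations, the decoder can never emit a string outside the entity catalogue, which is the property the paper exploits.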

Deep learning for text style transfer: A survey

D Jin, Z Jin, Z Hu, O Vechtomova… - Computational …, 2022 - direct.mit.edu
Text style transfer is an important task in natural language generation, which aims to control
certain attributes in the generated text, such as politeness, emotion, humor, and many …

NeuroLogic A*esque decoding: Constrained text generation with lookahead heuristics

X Lu, S Welleck, P West, L Jiang, J Kasai… - arXiv preprint arXiv …, 2021 - arxiv.org
The dominant paradigm for neural text generation is left-to-right decoding from
autoregressive language models. Constrained or controllable generation under complex …
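
A minimal sketch of the lookahead idea this entry names: each candidate token is scored by its immediate model score plus a heuristic estimating whether a short greedy rollout can still satisfy a lexical constraint. The toy bigram table and the additive bonus are assumptions; the paper's heuristics and search are more elaborate.

```python
# Minimal sketch of lookahead-guided constrained decoding in the spirit of
# NeuroLogic A*esque (Lu et al., 2021). The toy bigram "LM" is an assumption.

BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"dog": 0.5, "cat": 0.5},
    "a": {"dog": 0.3, "bird": 0.7},
    "dog": {"barks": 1.0}, "cat": {"sleeps": 1.0},
    "bird": {"sings": 1.0},
    "barks": {"</s>": 1.0}, "sleeps": {"</s>": 1.0}, "sings": {"</s>": 1.0},
}
CONSTRAINT = "cat"   # a word the output must contain

def rollout(token, depth=3):
    """Greedy lookahead: does the constraint appear within `depth` steps?"""
    cur = token
    for _ in range(depth):
        if cur == CONSTRAINT:
            return True
        nxt = BIGRAMS.get(cur)
        if not nxt:
            return False
        cur = max(nxt, key=nxt.get)
    return cur == CONSTRAINT

def decode():
    seq, cur = [], "<s>"
    while cur != "</s>":
        cands = BIGRAMS[cur]
        # score = LM probability + bonus if lookahead can still reach the constraint
        cur = max(cands, key=lambda t: cands[t] + (1.0 if rollout(t) else 0.0))
        seq.append(cur)
    return " ".join(seq[:-1])

print(decode())  # lookahead steers generation toward "the cat sleeps"
```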

CommonGen: A constrained text generation challenge for generative commonsense reasoning

BY Lin, W Zhou, M Shen, P Zhou… - arXiv preprint arXiv …, 2019 - arxiv.org
Recently, large-scale pre-trained language models have demonstrated impressive
performance on several commonsense-reasoning benchmark datasets. However, building …
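
For concreteness, the CommonGen task maps a set of everyday concepts to a coherent sentence that uses all of them. The naive coverage check below is an illustrative assumption, not one of the benchmark's official metrics.

```python
# Illustrative sketch of the CommonGen task format (Lin et al., 2019):
# given a concept set, generate a sentence covering every concept.

def covers_all(concepts, sentence):
    words = sentence.lower().split()
    # crude stem match so "throws" counts for the concept "throw"
    return all(any(w.startswith(c) for w in words) for c in concepts)

concepts = {"dog", "frisbee", "catch", "throw"}
candidate = "A man throws a frisbee and his dog catches it."
print(covers_all(concepts, candidate))  # True
```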

Controlled text generation with natural language instructions

W Zhou, YE Jiang, E Wilcox… - International …, 2023 - proceedings.mlr.press
Large language models can be prompted to produce fluent output for a wide range of tasks
without being specifically trained to do so. Nevertheless, it is notoriously difficult to control …
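
A hedged sketch of the setting this entry studies: the control constraint is stated in natural language inside the prompt, and a programmatic check verifies whether the output actually satisfies it. The template and check_constraints() helper are illustrative assumptions, not the paper's protocol.

```python
# Sketch of instruction-based control in the spirit of Zhou et al. (2023).
# Template and checker are illustrative assumptions.

TEMPLATE = (
    "Instruction: Write a one-sentence summary of the text below. "
    "The summary must contain the word '{keyword}' and be under {max_words} words.\n"
    "Text: {text}\n"
    "Summary:"
)

def check_constraints(output: str, keyword: str, max_words: int) -> bool:
    """Verify that the constraints stated in the instruction actually hold."""
    return keyword.lower() in output.lower() and len(output.split()) <= max_words

prompt = TEMPLATE.format(keyword="climate", max_words=20,
                         text="Global temperatures rose again last year ...")
print(prompt)
print(check_constraints("Climate records show another year of warming.",
                        "climate", 20))  # True
```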

Adaptive machine translation with large language models

Y Moslem, R Haque, JD Kelleher, A Way - arXiv preprint arXiv:2301.13294, 2023 - arxiv.org
Consistency is a key requirement of high-quality translation. It is especially important to
adhere to pre-approved terminology and adapt to corrected translations in domain-specific …
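
A small sketch of the fuzzy-match prompting recipe this entry describes: similar (source, approved translation) pairs from a translation memory are prepended as in-context examples so the model adapts to domain terminology. The memory contents and the difflib similarity measure are illustrative assumptions.

```python
# Sketch of fuzzy-match prompting for adaptive MT, following the general
# recipe in Moslem et al. (2023). TM contents are illustrative assumptions.

import difflib

TM = [  # (source, approved translation)
    ("Press the power button.", "Appuyez sur le bouton d'alimentation."),
    ("Hold the reset button for five seconds.",
     "Maintenez le bouton de réinitialisation pendant cinq secondes."),
]

def fuzzy_matches(src: str, k: int = 1):
    """Rank translation-memory pairs by string similarity to the new source."""
    return sorted(TM, key=lambda p: difflib.SequenceMatcher(
        None, src, p[0]).ratio(), reverse=True)[:k]

def build_prompt(src: str) -> str:
    lines = []
    for s, t in fuzzy_matches(src):
        lines.append(f"English: {s}\nFrench: {t}\n")
    lines.append(f"English: {src}\nFrench:")
    return "\n".join(lines)

print(build_prompt("Press the reset button."))
```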

Decoding methods in neural language generation: a survey

S Zarrieß, H Voigt, S Schüz - Information, 2021 - mdpi.com
Neural encoder-decoder models for language generation can be trained to predict words
directly from linguistic or non-linguistic inputs. When generating with these so-called end-to-end …
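
As one concrete instance of the decoding methods this survey covers, here is a minimal sketch of nucleus (top-p) sampling; the toy next-token distribution is an assumption.

```python
# Minimal sketch of nucleus (top-p) sampling: sample only from the smallest
# set of highest-probability tokens whose cumulative mass exceeds p.

import random

def nucleus_sample(probs: dict, p: float = 0.9) -> str:
    items = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, mass = [], 0.0
    for tok, pr in items:
        nucleus.append((tok, pr))
        mass += pr
        if mass >= p:
            break
    total = sum(pr for _, pr in nucleus)   # renormalise inside the nucleus
    r, acc = random.random() * total, 0.0
    for tok, pr in nucleus:
        acc += pr
        if r <= acc:
            return tok
    return nucleus[-1][0]

dist = {"the": 0.5, "a": 0.3, "cat": 0.15, "zebra": 0.05}
print(nucleus_sample(dist, p=0.9))  # low-mass tail token "zebra" is excluded
```

Truncating the tail this way avoids the degenerate repetitions of greedy decoding while still blocking very unlikely tokens, which is the trade-off the survey discusses.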

CTRLsum: Towards generic controllable text summarization

J He, W Kryściński, B McCann, N Rajani… - arXiv preprint arXiv …, 2020 - arxiv.org
Current summarization systems yield generic summaries that are disconnected from users'
preferences and expectations. To address this limitation, we present CTRLsum, a novel …
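
A sketch of the control interface this entry proposes at a high level: control keywords are prepended to the source document so the summarizer conditions on them. The " => " separator and the keyword joining below are illustrative assumptions, not the paper's exact preprocessing.

```python
# Sketch of keyword-conditioned input construction in the spirit of
# CTRLsum (He et al., 2020). Separator and formatting are assumptions.

def build_input(keywords, document, sep=" => "):
    """Join control keywords and the source text into one model input."""
    return " | ".join(keywords) + sep + document

doc = ("The city council approved a new budget on Tuesday, "
       "raising funding for public transit and parks.")
print(build_input(["budget", "public transit"], doc))
```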