Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

A survey of knowledge-enhanced text generation

W Yu, C Zhu, Z Li, Z Hu, Q Wang, H Ji… - ACM Computing …, 2022 - dl.acm.org
The goal of text-to-text generation is to make machines express like a human in many
applications such as conversation, summarization, and translation. It is one of the most …

Unified language model pre-training for natural language understanding and generation

L Dong, N Yang, W Wang, F Wei… - Advances in neural …, 2019 - proceedings.neurips.cc
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre …

Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training

W Qi, Y Yan, Y Gong, D Liu, N Duan, J Chen… - arXiv preprint arXiv …, 2020 - arxiv.org
This paper presents a new sequence-to-sequence pre-training model called ProphetNet,
which introduces a novel self-supervised objective named future n-gram prediction and the …

GSum: A general framework for guided neural abstractive summarization

ZY Dou, P Liu, H Hayashi, Z Jiang, G Neubig - arXiv preprint arXiv …, 2020 - arxiv.org
Neural abstractive summarization models are flexible and can produce coherent summaries,
but they are sometimes unfaithful and can be difficult to control. While previous studies …

Improving named entity recognition by external context retrieving and cooperative learning

X Wang, Y Jiang, N Bach, T Wang, Z Huang… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent advances in Named Entity Recognition (NER) show that document-level contexts
can significantly improve model performance. In many application scenarios, however, such …

Deep reinforcement and transfer learning for abstractive text summarization: A review

A Alomari, N Idris, AQM Sabri, I Alsmadi - Computer Speech & Language, 2022 - Elsevier
Automatic Text Summarization (ATS) is an important area in Natural Language
Processing (NLP) with the goal of shortening a long text into a more compact version by …

A survey on retrieval-augmented text generation

H Li, Y Su, D Cai, Y Wang, L Liu - arXiv preprint arXiv:2202.01110, 2022 - arxiv.org
Recently, retrieval-augmented text generation has attracted increasing attention from the
computational linguistics community. Compared with conventional generation models …

Hybrid retrieval-generation reinforced agent for medical image report generation

Y Li, X Liang, Z Hu, EP Xing - Advances in neural …, 2018 - proceedings.neurips.cc
Generating long and coherent reports to describe medical images poses challenges to
bridging visual patterns with informative human linguistic descriptions. We propose a novel …

Knowledge-driven encode, retrieve, paraphrase for medical image report generation

CY Li, X Liang, Z Hu, EP Xing - Proceedings of the AAAI conference on …, 2019 - aaai.org
Generating long and semantic-coherent reports to describe medical images poses great
challenges towards bridging visual and linguistic modalities, incorporating medical domain …