The goal of text-to-text generation is to make machines express themselves like humans in many applications such as conversation, summarization, and translation. It is one of the most …
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre …
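UniLM's unification rests on self-attention masks: one Transformer is pre-trained under bidirectional, left-to-right, and sequence-to-sequence masking. Below is a minimal sketch of the sequence-to-sequence mask, where source tokens attend bidirectionally within the source and target tokens attend to the source plus earlier target tokens; the function name and the NumPy encoding (True = may attend) are illustrative, not taken from the paper's code.

```python
import numpy as np

def seq2seq_attention_mask(src_len: int, tgt_len: int) -> np.ndarray:
    """Boolean attention mask for UniLM-style seq2seq pre-training (sketch)."""
    n = src_len + tgt_len
    mask = np.zeros((n, n), dtype=bool)
    # Source block: full bidirectional attention within the source segment.
    mask[:src_len, :src_len] = True
    # Target rows: every target position may attend to the whole source ...
    mask[src_len:, :src_len] = True
    # ... and causally (left-to-right) within the target segment.
    mask[src_len:, src_len:] = np.tril(np.ones((tgt_len, tgt_len), dtype=bool))
    return mask

# Example: 3 source tokens, 2 target tokens.
print(seq2seq_attention_mask(3, 2).astype(int))
```

Swapping in an all-True mask or a fully causal mask recovers the bidirectional and left-to-right pre-training variants with the same weights.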
This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction and the …
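Future n-gram prediction trains each position to predict the next n tokens at once rather than only the immediate next token. The sketch below shows one hedged way to write such a loss, assuming the model already emits one logit stream per future offset; the function name, input layout, and per-offset weighting are illustrative assumptions, not ProphetNet's actual implementation.

```python
import torch
import torch.nn.functional as F

def future_ngram_loss(logits_per_step, targets, n: int, alphas):
    """Sketch of a future n-gram prediction loss.

    logits_per_step: list of n tensors, each (batch, seq_len, vocab);
        logits_per_step[k] scores the token k+1 steps ahead of position t.
    targets: (batch, seq_len) gold ids, aligned as standard next-token labels.
    alphas: per-offset loss weights (assumed hyperparameters).
    """
    loss = 0.0
    for k in range(n):
        # Shift labels left by k so position t is supervised with token t+k+1.
        shifted = targets[:, k:]
        logits = logits_per_step[k][:, : shifted.size(1)]
        loss = loss + alphas[k] * F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), shifted.reshape(-1)
        )
    return loss

# Toy usage with random logits (batch=2, seq=5, vocab=11, n=2).
streams = [torch.randn(2, 5, 11) for _ in range(2)]
gold = torch.randint(0, 11, (2, 5))
print(future_ngram_loss(streams, gold, n=2, alphas=[1.0, 0.5]))
```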
Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control. While previous studies …
Recent advances in Named Entity Recognition (NER) show that document-level contexts can significantly improve model performance. In many application scenarios, however, such …
Automatic Text Summarization (ATS) is an important area in Natural Language Processing (NLP) with the goal of shortening a long text into a more compact version by …
H. Li, Y. Su, D. Cai, Y. Wang, and L. Liu. A Survey on Retrieval-Augmented Text Generation. arXiv preprint arXiv:2202.01110, 2022.
Recently, retrieval-augmented text generation has attracted increasing attention from the computational linguistics community. Compared with conventional generation models …
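A retrieval-augmented generator typically retrieves entries similar to the input from an external memory and conditions generation on them. The toy pipeline below uses sparse TF-IDF retrieval and a placeholder "generator" that only prints the augmented input; the memory contents, special tokens, and function names are illustrative, and real systems would plug in a dense retriever (e.g. DPR) and a neural seq2seq model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical external memory of past utterances.
MEMORY = [
    "good morning , how can i help you ?",
    "the weather today is sunny and warm .",
    "thanks , that was very helpful !",
]

vectorizer = TfidfVectorizer().fit(MEMORY)
memory_vecs = vectorizer.transform(MEMORY)

def retrieve(query: str, k: int = 2):
    """Return the k memory entries most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), memory_vecs)[0]
    return [MEMORY[i] for i in sims.argsort()[::-1][:k]]

def generate(query: str) -> str:
    """Placeholder: show the retrieval-augmented input a real model would consume."""
    evidence = " [SEP] ".join(retrieve(query))
    return f"{query} [SEP] {evidence}"

print(generate("how is the weather ?"))
```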
Generating long and coherent reports to describe medical images poses challenges in bridging visual patterns with informative human linguistic descriptions. We propose a novel …
Generating long and semantically coherent reports to describe medical images poses great challenges in bridging visual and linguistic modalities and incorporating medical domain …