Recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of why they perform so well …
S Xu, X Zhang, Y Wu, F Wei - Proceedings of the AAAI conference on …, 2022 - ojs.aaai.org
Contrastive learning models have achieved great success in unsupervised visual representation learning, which maximize the similarities between feature representations of …
B Gunel, C Zhu, M Zeng, X Huang - arXiv preprint arXiv:2006.15435, 2020 - arxiv.org
Neural models have become successful at producing abstractive summaries that are human-readable and fluent. However, these models have two critical shortcomings: they often don't …
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal …
Automatic summarization generates concise summaries that contain key ideas of source documents. As the most mainstream datasets for the news sub-domain, CNN/DailyMail and …
Text summarization aims at compressing long documents into a shorter form that conveys the most important parts of the original document. Despite increased interest in the …
Researchers use figures to communicate rich, complex information in scientific papers. The captions of these figures are critical to conveying effective messages. However, low-quality …
Reinforcement Learning (RL) based document summarisation systems yield state-of-the-art performance in terms of ROUGE scores, because they directly use ROUGE as the rewards …
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents …