Y Liu, P Liu - arXiv preprint arXiv:2106.01890, 2021 - arxiv.org
In this paper, we present a conceptually simple yet empirically powerful framework for abstractive summarization, SimCLS, which can bridge the gap between the learning …
We present a survey covering the state of the art in low-resource machine translation (MT) research. There are currently around 7,000 languages spoken in the world and almost all …
X Cheng, B Cao, Q Ye, Z Zhu, H Li, Y Zou - arXiv preprint arXiv …, 2023 - arxiv.org
Spoken language understanding (SLU) is a fundamental task in task-oriented dialogue systems. However, the inevitable errors from automatic speech recognition (ASR) usually …
Graph neural networks (GNNs) have emerged as the state-of-the-art paradigm for collaborative filtering (CF). To improve the representation quality over limited labeled data …
R Ye, M Wang, L Li - arXiv preprint arXiv:2205.02444, 2022 - arxiv.org
How can we learn unified representations for spoken utterances and their written text? Learning similar representations for semantically similar speech and text is important for …
Zero-shot stance detection (ZSSD) is challenging as it requires detecting the stance of previously unseen targets during the inference stage. Being able to detect the target-related …
In recent years, considerable research has been dedicated to applying neural models to natural language generation (NLG). The primary objective is to …
R Zhang, Y Ji, Y Zhang… - Proceedings of the 2022 …, 2022 - aclanthology.org
Current NLP models heavily rely on effective representation learning algorithms. Contrastive learning is one such technique to learn an embedding space such that similar data sample …
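The snippet above describes contrastive learning as learning an embedding space in which similar samples sit close together. A minimal sketch of this idea, using an InfoNCE-style loss over cosine similarities (all function names here are illustrative, not taken from any of the cited papers):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor.

    The loss is low when the anchor is more similar to its positive
    than to any negative; minimizing it pulls positives together
    and pushes negatives apart in the embedding space.
    """
    # Temperature-scaled similarity logits: positive first, then negatives.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    # Numerically stable softmax cross-entropy with the positive as the target.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return -math.log(exps[0] / sum(exps))
```

For example, an anchor embedding that nearly matches its positive pair incurs a near-zero loss, while the same anchor paired with an orthogonal vector as its "positive" incurs a large one; in practice the embeddings would come from a trained encoder and the loss would be averaged over a batch.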
TT Nguyen, AT Luu - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
Current state-of-the-art cross-lingual summarization models employ a multi-task learning paradigm, which works on a shared vocabulary module and relies on the self-attention …