K Bu, Y Liu, X Ju - Knowledge-Based Systems, 2023 - Elsevier
Sentiment analysis is one of the well-known traditional tasks in Natural Language Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of …
The spread of rumors alongside breaking events seriously obscures the truth in the era of social media. Previous studies reveal that, due to the lack of annotated resources, rumors …
The BLOOM model is a large, publicly available multilingual language model, but its pretraining was limited to 46 languages. To extend the benefits of BLOOM to other …
We systematically investigate lightweight strategies to adapt large language models (LLMs) for the task of radiology report summarization (RRS). Specifically, we focus on domain …
The availability of large, high-quality datasets has been a major driver of recent progress in question answering (QA). Such annotated datasets, however, are difficult and costly to …
In this paper, we explore the challenging problem of performing a generative task in a target language when labeled data is only available in English, using summarization as a case …
Large-scale generative language models such as GPT-3 are competitive few-shot learners. While these models are known to jointly represent many different languages, their …
K Qi, H Wan, J Du, H Chen - … of the 60th Annual Meeting of the …, 2022 - aclanthology.org
Cross-lingual natural language inference (XNLI) is a fundamental task in cross-lingual natural language understanding. Recently, this task has commonly been addressed by pre-trained …
Recent research has made impressive progress in large-scale multimodal pre-training. Given the rapid growth of model size, it is necessary to seek efficient and flexible …