Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

A survey on text classification algorithms: From text to predictions

A Gasparetto, M Marcuzzo, A Zangari, A Albarelli - Information, 2022 - mdpi.com
In recent years, the exponential growth of digital documents has been met by rapid progress
in text classification techniques. Newly proposed machine learning algorithms leverage the …

ChatGPT: Jack of all trades, master of none

J Kocoń, I Cichecki, O Kaszyca, M Kochanek, D Szydło… - Information …, 2023 - Elsevier
OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and
revolutionized the approach to human-model interaction in artificial intelligence. The first …

A large language model for electronic health records

X Yang, A Chen, N PourNejatian, HC Shin… - NPJ digital …, 2022 - nature.com
There is an increasing interest in developing artificial intelligence (AI) systems to process
and interpret electronic health records (EHRs). Natural language processing (NLP) powered …

Multitask prompted training enables zero-shot task generalization

V Sanh, A Webson, C Raffel, SH Bach… - arXiv preprint arXiv …, 2021 - arxiv.org
Large language models have recently been shown to attain reasonable zero-shot
generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that …

Searching for efficient transformers for language modeling

D So, W Mańke, H Liu, Z Dai… - Advances in neural …, 2021 - proceedings.neurips.cc
Large Transformer models have been central to recent advances in natural language
processing. The training and inference costs of these models, however, have grown rapidly …

Differentiable prompt makes pre-trained language models better few-shot learners

N Zhang, L Li, X Chen, S Deng, Z Bi, C Tan… - arXiv preprint arXiv …, 2021 - arxiv.org
Large-scale pre-trained language models have contributed significantly to natural language
processing by demonstrating remarkable abilities as few-shot learners. However, their …

Less annotating, more classifying: Addressing the data scarcity issue of supervised machine learning with deep transfer learning and BERT-NLI

M Laurer, W Van Atteveldt, A Casas, K Welbers - Political Analysis, 2024 - cambridge.org
Supervised machine learning is an increasingly popular tool for analyzing large political text
corpora. The main disadvantage of supervised machine learning is the need for thousands …

Unlearn what you want to forget: Efficient unlearning for LLMs

J Chen, D Yang - arXiv preprint arXiv:2310.20150, 2023 - arxiv.org
Large language models (LLMs) have achieved significant progress from pre-training on and
memorizing a wide range of textual data; however, this process might suffer from privacy …

CrossFit: A few-shot learning challenge for cross-task generalization in NLP

Q Ye, BY Lin, X Ren - arXiv preprint arXiv:2104.08835, 2021 - arxiv.org
Humans can learn a new language task efficiently with only a few examples, by leveraging
the knowledge obtained when learning prior tasks. In this paper, we explore whether and …