Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

Large language models for software engineering: A systematic literature review

X Hou, Y Zhao, Y Liu, Z Yang, K Wang, L Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have significantly impacted numerous domains, notably
including Software Engineering (SE). Nevertheless, a well-rounded understanding of the …

Large language models encode clinical knowledge

K Singhal, S Azizi, T Tu, SS Mahdavi, J Wei, HW Chung… - Nature, 2023 - nature.com
Large language models (LLMs) have demonstrated impressive capabilities, but the bar for
clinical applications is high. Attempts to assess the clinical knowledge of models typically …

A prompt pattern catalog to enhance prompt engineering with ChatGPT

J White, Q Fu, S Hays, M Sandborn, C Olea… - arXiv preprint arXiv …, 2023 - arxiv.org
Prompt engineering is an increasingly important skill set needed to converse effectively with
large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM …

P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks

X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …

Prompt distribution learning

Y Lu, J Liu, Y Zhang, Y Liu… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
We present prompt distribution learning for effectively adapting a pre-trained vision-
language model to address downstream recognition tasks. Our method not only learns low …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

Exploring visual prompts for adapting large-scale models

H Bahng, A Jahanian, S Sankaranarayanan… - arXiv preprint arXiv …, 2022 - arxiv.org
We investigate the efficacy of visual prompting to adapt large-scale models in vision.
Following the recent approach from prompt tuning and adversarial reprogramming, we learn …

Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2022 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …