A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - International Journal of …, 2024 - Springer
Abstract Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks across different data modalities. A PFM (e.g., BERT, ChatGPT, GPT-4) is …

Large language models for generative information extraction: A survey

D Xu, W Chen, W Peng, C Zhang, T Xu, X Zhao… - Frontiers of Computer …, 2024 - Springer
Abstract Information Extraction (IE) aims to extract structural knowledge from plain natural
language texts. Recently, generative Large Language Models (LLMs) have demonstrated …

A survey on text classification: From traditional to deep learning

Q Li, H Peng, J Li, C Xia, R Yang, L Sun… - ACM Transactions on …, 2022 - dl.acm.org
Text classification is the most fundamental and essential task in natural language
processing. The last decade has seen a surge of research in this area due to the …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in
pre-trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …

KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction

X Chen, N Zhang, X Xie, S Deng, Y Yao, C Tan… - Proceedings of the …, 2022 - dl.acm.org
Recently, prompt-tuning has achieved promising results for specific few-shot classification
tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …

OpenPrompt: An open-source framework for prompt-learning

N Ding, S Hu, W Zhao, Y Chen, Z Liu, HT Zheng… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-learning has become a new paradigm in modern natural language processing,
which directly adapts pre-trained language models (PLMs) to cloze-style prediction …

KLUE: Korean Language Understanding Evaluation

S Park - arXiv preprint arXiv:2105.09680, 2021 - academia.edu
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a
collection of 8 Korean natural language understanding (NLU) tasks, including Topic …

Knowledge enhanced contextual word representations

ME Peters, M Neumann, RL Logan IV… - arXiv preprint arXiv …, 2019 - arxiv.org
Contextual word representations, typically trained on unstructured, unlabeled text, do not
contain any explicit grounding to real world entities and are often unable to remember facts …

BERT rediscovers the classical NLP pipeline

I Tenney - arXiv preprint arXiv:1905.05950, 2019 - fq.pkwyx.com
Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We
focus on one such model, BERT, and aim to quantify where linguistic information is captured …

Matching the blanks: Distributional similarity for relation learning

LB Soares, N FitzGerald, J Ling… - arXiv preprint arXiv …, 2019 - arxiv.org
General purpose relation extractors, which can model arbitrary relations, are a core
aspiration in information extraction. Efforts have been made to build general purpose …