Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing

P Liu, W Yuan, J Fu, Z Jiang, H Hayashi… - ACM Computing …, 2023 - dl.acm.org
This article surveys and organizes research works in a new paradigm in natural language
processing, which we dub “prompt-based learning.” Unlike traditional supervised learning …

LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities

Y Zhu, X Wang, J Chen, S Qiao, Y Ou, Y Yao, S Deng… - World Wide Web, 2024 - Springer
This paper presents an exhaustive quantitative and qualitative evaluation of Large
Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We …

P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks

X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …
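
The snippet above describes the core mechanism of continuous prompt tuning: only a small set of learnable prompt vectors is updated while the language model itself stays frozen. Below is a minimal illustrative sketch of that idea; the backbone encoder, dimensions, and hyperparameters are stand-in assumptions, not the models or settings used in the paper.

```python
# Minimal sketch of continuous ("soft") prompt tuning with a frozen backbone.
# The backbone, dimensions, and classification head are illustrative stand-ins.
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_dim: int,
                 prompt_len: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze the language model
            p.requires_grad = False
        # Only these prompt vectors (and the small head) are trained.
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim) embeddings of the input text
        batch = token_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([prompts, token_embeds], dim=1)   # prepend soft prompts
        h = self.backbone(x)                            # frozen forward pass
        return self.head(h[:, 0])                       # classify from first position

# Toy usage with a stand-in frozen encoder
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = SoftPromptClassifier(encoder, hidden_dim=64, prompt_len=8, num_classes=2)
logits = model(torch.randn(3, 10, 64))   # shape (3, 2)
```

Because gradients flow only into the prompt vectors and the classification head, per-task storage reduces to a few thousand parameters rather than a full model copy.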

GPT understands, too

X Liu, Y Zheng, Z Du, M Ding, Y Qian, Z Yang, J Tang - AI Open, 2024 - Elsevier
Prompting a pretrained language model with natural language patterns has proven
effective for natural language understanding (NLU). However, our preliminary study reveals …

PTR: Prompt tuning with rules for text classification

X Han, W Zhao, N Ding, Z Liu, M Sun - AI Open, 2022 - Elsevier
Recently, prompt tuning has been widely applied to elicit the rich knowledge in pre-
trained language models (PLMs) for NLP tasks. Although prompt tuning has achieved …

Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification

S Hu, N Ding, H Wang, Z Liu, J Wang, J Li… - arXiv preprint arXiv …, 2021 - arxiv.org
Tuning pre-trained language models (PLMs) with task-specific prompts has been a
promising approach for text classification. Particularly, previous studies suggest that prompt …
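
The snippet above refers to incorporating knowledge into the prompt verbalizer, i.e., mapping each class to an expanded set of related label words whose masked-LM scores are aggregated into a class score. A minimal sketch of that idea follows; the template, label words, and model choice are illustrative assumptions rather than the paper's actual resources.

```python
# Minimal sketch of a knowledge-expanded verbalizer for prompt-based classification:
# each class maps to several related label words, and the masked-LM logits for
# those words are averaged into a class score. Labels/template are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical verbalizer: class -> expanded label words
verbalizer = {
    "sports":   ["sports", "football", "basketball"],
    "politics": ["politics", "government", "election"],
}

text = "The election results were announced last night."
prompt = f"{text} This topic is about {tokenizer.mask_token}."
inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]   # vocab-size logits at [MASK]

scores = {}
for label, words in verbalizer.items():
    ids = [tokenizer.convert_tokens_to_ids(w) for w in words]
    scores[label] = logits[ids].mean().item()      # average over label words

print(max(scores, key=scores.get))                 # predicted class
```

Averaging over several label words per class is one simple aggregation choice; the cited work studies more refined ways to select and weight such knowledge-derived label words.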

Exploring collaboration mechanisms for LLM agents: A social psychology view

J Zhang, X Xu, N Zhang, R Liu, B Hooi… - arXiv preprint arXiv …, 2023 - arxiv.org
As Natural Language Processing (NLP) systems are increasingly employed in intricate
social environments, a pressing question emerges: Can these NLP systems mirror human …

Can knowledge graphs reduce hallucinations in LLMs? A survey

G Agrawal, T Kumarage, Z Alghamdi, H Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Contemporary LLMs are prone to producing hallucinations, stemming mainly from
knowledge gaps within the models. To address this critical limitation, researchers employ …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …