Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

A comprehensive survey on relation extraction: Recent advances and new frontiers

X Zhao, Y Deng, M Yang, L Wang, R Zhang… - ACM Computing …, 2024 - dl.acm.org
Relation extraction (RE) involves identifying the relations between entities from underlying
content. RE serves as the foundation for many natural language processing (NLP) and …

P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks

X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt tuning, which only tunes continuous prompts with a frozen language model,
substantially reduces per-task storage and memory usage at training. However, in the …

Knowledge graph-enhanced molecular contrastive learning with functional prompt

Y Fang, Q Zhang, N Zhang, Z Chen, X Zhuang… - Nature Machine …, 2023 - nature.com
Deep learning models can accurately predict molecular properties and help make the
search for potential drug candidates faster and more efficient. Many existing methods are …

DeepStruct: Pretraining of language models for structure prediction

C Wang, X Liu, Z Chen, H Hong, J Tang… - arXiv preprint arXiv …, 2022 - arxiv.org
We introduce a method for improving the structural understanding abilities of language
models. Unlike previous approaches that finetune the models with task-specific …

Universal information extraction as unified semantic matching

J Lou, Y Lu, D Dai, W Jia, H Lin, X Han… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
The challenge of information extraction (IE) lies in the diversity of label schemas and the
heterogeneity of structures. Traditional methods require task-specific model design and rely …

LLMaAA: Making large language models as active annotators

R Zhang, Y Li, Y Ma, M Zhou, L Zou - arXiv preprint arXiv:2310.19596, 2023 - arxiv.org
Prevalent supervised learning methods in natural language processing (NLP) are
notoriously data-hungry, demanding large amounts of high-quality annotated data. In …

True few-shot learning with prompts—A real-world perspective

T Schick, H Schütze - … of the Association for Computational Linguistics, 2022 - direct.mit.edu
Prompt-based approaches excel at few-shot learning. However, Perez et al. recently cast
doubt on their performance as they had difficulty getting good results in a “true” few-shot …

Augmenting low-resource text classification with graph-grounded pre-training and prompting

Z Wen, Y Fang - Proceedings of the 46th International ACM SIGIR …, 2023 - dl.acm.org
Text classification is a fundamental problem in information retrieval with many real-world
applications, such as predicting the topics of online articles and the categories of e …

Textual entailment for event argument extraction: Zero-and few-shot with multi-source learning

O Sainz, I Gonzalez-Dios, OL de Lacalle, B Min… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent work has shown that NLP tasks such as Relation Extraction (RE) can be recast as
Textual Entailment tasks using verbalizations, with strong performance in zero-shot and few …