Good visual guidance makes a better extractor: Hierarchical visual prefix for multimodal entity and relation extraction

X Chen, N Zhang, L Li, Y Yao, S Deng, C Tan… - arXiv preprint arXiv …, 2022 - arxiv.org
Multimodal named entity recognition and relation extraction (MNER and MRE) are a
fundamental and crucial branch of information extraction. However, existing approaches for …

NSP-BERT: A prompt-based few-shot learner through an original pre-training task: next sentence prediction

Y Sun, Y Zheng, C Hao, H Qiu - arXiv preprint arXiv:2109.03564, 2021 - arxiv.org
Using prompts to direct language models to perform various downstream tasks, also known
as prompt-based learning or prompt-learning, has recently achieved significant success in …

A multi-task semantic decomposition framework with task-specific pre-training for few-shot NER

G Dong, Z Wang, J Zhao, G Zhao, D Guo, D Fu… - Proceedings of the …, 2023 - dl.acm.org
The objective of few-shot named entity recognition is to identify named entities with limited
labeled instances. Previous works have primarily focused on optimizing the traditional token …

InstructEdit: Instruction-based knowledge editing for large language models

N Zhang, B Tian, S Cheng, X Liang, Y Hu… - arXiv preprint arXiv …, 2024 - arxiv.org
Knowledge editing for large language models can offer an efficient solution to alter a
model's behavior without negatively impacting the overall performance. However, the …

Hard Sample Aware Prompt-Tuning

Y Xu, Q An, J Zhang, P Li, Z Nie - … of the 61st Annual Meeting of …, 2023 - aclanthology.org
Prompt-tuning based few-shot learning has garnered increasing attention in recent years
due to its efficiency and promising capability. To achieve the best performance for NLP tasks …

DemoNSF: A multi-task demonstration-based generative framework for the noisy slot filling task

G Dong, T Hui, Z GongQue, J Zhao, D Guo… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, prompt-based generative frameworks have shown impressive capabilities in
sequence labeling tasks. However, in practical dialogue scenarios, relying solely on …

Tuning LLMs with contrastive alignment instructions for machine translation in unseen, low-resource languages

Z Mao, Y Yu - arXiv preprint arXiv:2401.05811, 2024 - arxiv.org
This article introduces contrastive alignment instructions (AlignInstruct) to address two
challenges in machine translation (MT) on large language models (LLMs). One is the …

Let Me Check the Examples: Enhancing Demonstration Learning via Explicit Imitation

S Wang, K Wei, H Zhang, Y Li, W Wu - arXiv preprint arXiv:2209.00455, 2022 - arxiv.org
Demonstration learning aims to guide prompt prediction by providing answered
demonstrations in few-shot settings. Despite achieving promising results, existing work …

Outperforming Larger Models on Text Classification Through Continued Pre-training

Y Zheng, M Liu, Z Ao, W Hao, H Zhang… - … Conference on Natural …, 2024 - Springer
Generative large language models (LLMs), such as GPT-4, have demonstrated remarkable
performance across a wide range of NLP tasks. The increased number of LLMs' parameters …

Chinese Event Causality Identification Based on Retrieval Enhancement

Y Gao, Y Ren, J Rao, Z Chen, Q Xi, H Wang… - … Conference on Natural …, 2023 - Springer
Event causality identification (ECI) is a critical and challenging information extraction task,
which aims to identify whether a causal relationship exists between two events. To …