Meta self-training for few-shot neural sequence labeling

Y Wang, S Mukherjee, H Chu, Y Tu, M Wu… - Proceedings of the 27th …, 2021 - dl.acm.org
Neural sequence labeling is widely adopted for many Natural Language Processing (NLP)
tasks, such as Named Entity Recognition (NER) and slot tagging for dialog systems and …

Empower sequence labeling with task-aware neural language model

L Liu, J Shang, X Ren, F Xu, H Gui, J Peng… - Proceedings of the AAAI …, 2018 - ojs.aaai.org
Linguistic sequence labeling is a general approach encompassing a variety of problems,
such as part-of-speech tagging and named entity recognition. Recent advances in neural …

An enhanced span-based decomposition method for few-shot sequence labeling

P Wang, R Xu, T Liu, Q Zhou, Y Cao, B Chang… - arXiv preprint arXiv …, 2021 - arxiv.org
Few-Shot Sequence Labeling (FSSL) is a canonical paradigm for tagging models, e.g.,
named entity recognition and slot filling, to generalize on an emerging, resource-scarce …

Augmented natural language for generative sequence labeling

B Athiwaratkun, CN Santos, J Krone… - arXiv preprint arXiv …, 2020 - arxiv.org
We propose a generative framework for joint sequence labeling and sentence-level
classification. Our model performs multiple sequence labeling tasks at once using a single …

InstructionNER: A multi-task instruction-based generative framework for few-shot NER

L Wang, R Li, Y Yan, Y Yan, S Wang, W Wu… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, prompt-based methods have achieved significant performance in few-shot
learning scenarios by bridging the gap between language model pre-training and fine …

Want to reduce labeling cost? GPT-3 can help

S Wang, Y Liu, Y Xu, C Zhu, M Zeng - arXiv preprint arXiv:2108.13487, 2021 - arxiv.org
Data annotation is a time-consuming and labor-intensive process for many NLP tasks.
Although there exist various methods to produce pseudo data labels, they are often task …

Semi-supervised sequence tagging with bidirectional language models

ME Peters, W Ammar, C Bhagavatula… - arXiv preprint arXiv …, 2017 - arxiv.org
Pre-trained word embeddings learned from unlabeled text have become a standard
component of neural network architectures for NLP tasks. However, in most cases, the …

Template-free prompt tuning for few-shot NER

R Ma, X Zhou, T Gui, Y Tan, L Li, Q Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Prompt-based methods have been successfully applied in sentence-level few-shot learning
tasks, mostly owing to the sophisticated design of templates and label words. However …

Decomposed meta-learning for few-shot named entity recognition

T Ma, H Jiang, Q Wu, T Zhao, CY Lin - arXiv preprint arXiv:2204.05751, 2022 - arxiv.org
Few-shot named entity recognition (NER) systems aim at recognizing novel-class named
entities based on only a few labeled examples. In this paper, we present a decomposed …

A multi-lingual multi-task architecture for low-resource sequence labeling

Y Lin, S Yang, V Stoyanov, H Ji - … of the 56th Annual Meeting of …, 2018 - aclanthology.org
We propose a multi-lingual multi-task architecture to develop supervised models with a
minimal amount of labeled data for sequence labeling. In this new architecture, we combine …