Information Extraction (IE) aims to extract structural knowledge from plain natural language texts. Recently, generative Large Language Models (LLMs) have demonstrated …
Text classification is a fundamental task in natural language processing. The last decade has seen a surge of research in this area due to the …
Recently, prompt tuning has been widely applied to stimulate the rich knowledge in pre-trained language models (PLMs) to serve NLP tasks. Although prompt tuning has achieved …
Recently, prompt-tuning has achieved promising results for specific few-shot classification tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
Prompt-learning has become a new paradigm in modern natural language processing, which directly adapts pre-trained language models (PLMs) to cloze-style prediction …
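The cloze-style mechanism described in the prompt-tuning abstracts above can be made concrete with a minimal sketch. The snippet below assumes a BERT-style masked language model from the HuggingFace `transformers` library; the template text ("It was [MASK].") and the verbalizer (the label-to-word mapping) are illustrative choices, not taken from any of the papers listed here.

```python
# Minimal sketch of cloze-style prompt-based classification, assuming a
# BERT-style masked LM from HuggingFace `transformers`. Template and
# verbalizer words below are illustrative, not from the papers above.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def classify(sentence: str) -> str:
    # Insert a template with a [MASK] slot into the input, turning
    # classification into masked-token prediction.
    prompt = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Locate the [MASK] position and score each label's verbalizer word there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    verbalizer = {"positive": "great", "negative": "terrible"}  # label -> word
    scores = {
        label: logits[0, mask_pos, tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in verbalizer.items()
    }
    return max(scores, key=scores.get)

print(classify("The movie was a joy from start to finish."))  # expected: positive
```

No task-specific training is involved in this sketch: the frozen masked LM scores the verbalizer words directly, which is why prompt-based methods are attractive in the few-shot settings these papers study.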
S. Park - arXiv preprint arXiv:2105.09680, 2021
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a collection of 8 Korean natural language understanding (NLU) tasks, including Topic …
Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts …
I. Tenney - arXiv preprint arXiv:1905.05950, 2019
Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured …
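To make the probing setup concrete, here is a minimal sketch, assuming HuggingFace `transformers`, that extracts per-layer hidden states from BERT. These activations are the raw material for the probing classifiers used to quantify where linguistic information is captured; the probe itself (e.g., a per-layer linear classifier) is omitted.

```python
# Minimal sketch of a layer-wise probing setup, assuming HuggingFace
# `transformers`: extract per-layer hidden states from BERT, the inputs
# to probing classifiers. The probe itself is omitted.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states holds the embedding layer plus one tensor per transformer
# layer, each of shape (batch, seq_len, hidden_size); probes trained on each
# layer reveal where different kinds of linguistic information reside.
for layer_idx, layer in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: {tuple(layer.shape)}")
```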
General-purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general-purpose …