Learning from few examples: A summary of approaches to few-shot learning

A Parnami, M Lee - arXiv preprint arXiv:2203.04291, 2022 - arxiv.org
Few-Shot Learning refers to the problem of learning the underlying pattern in the data just
from a few training samples. Requiring a large number of data samples, many deep learning …
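
The few-shot setting is usually operationalized as N-way K-shot episodes. A minimal sketch of episode sampling follows; the toy dataset, function name, and defaults are illustrative assumptions, not from the paper.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
    """Sample one N-way K-shot episode: k_shot labeled support
    examples per class plus a disjoint query set for evaluation."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(data_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy usage: a 2-way 1-shot episode with 3 query examples per class.
data = {"cat": list(range(20)), "dog": list(range(20)), "fox": list(range(20))}
support, query = sample_episode(data, n_way=2, k_shot=1, n_query=3)
print(len(support), len(query))  # 2 6
```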

A survey on knowledge graphs: Representation, acquisition, and applications

S Ji, S Pan, E Cambria, P Marttinen… - IEEE transactions on …, 2021 - ieeexplore.ieee.org
Human knowledge provides a formal understanding of the world. Knowledge graphs that
represent structural relations between entities have become an increasingly popular …
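
A minimal sketch of the triple representation such graphs rest on, storing facts as (head entity, relation, tail entity); the example entities and the lookup helper are illustrative assumptions.

```python
# Facts as (head, relation, tail) triples; examples are illustrative.
triples = [
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Marie_Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def tails(triples, head, relation):
    """Return all tail entities linked to `head` via `relation`."""
    return [t for h, r, t in triples if h == head and r == relation]

print(tails(triples, "Marie_Curie", "born_in"))  # ['Warsaw']
```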

Graph neural networks: foundation, frontiers and applications

L Wu, P Cui, J Pei, L Zhao, X Guo - … of the 28th ACM SIGKDD Conference …, 2022 - dl.acm.org
The field of graph neural networks (GNNs) has made rapid and remarkable strides in
recent years. Graph neural networks, also known as deep learning on graphs, graph …
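
A minimal sketch of the message-passing pattern at the core of GNNs, where each node updates its features by aggregating its neighbors'; plain mean aggregation stands in, as a simplification, for the learned aggregate-and-update functions real GNN layers use.

```python
def message_passing(features, adjacency):
    """One round of neighborhood aggregation over a graph.
    features: {node: [float, ...]}; adjacency: {node: [neighbors]}."""
    updated = {}
    for node, neighbors in adjacency.items():
        msgs = [features[n] for n in neighbors] + [features[node]]
        dim = len(features[node])
        updated[node] = [sum(m[i] for m in msgs) / len(msgs) for i in range(dim)]
    return updated

# Toy 3-node graph: each node's new features mix in its neighbors'.
feats = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(message_passing(feats, adj))
```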

Knowprompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction

X Chen, N Zhang, X Xie, S Deng, Y Yao, C Tan… - Proceedings of the …, 2022 - dl.acm.org
Recently, prompt-tuning has achieved promising results for specific few-shot classification
tasks. The core idea of prompt-tuning is to insert text pieces (i.e., templates) into the input and …
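
A minimal sketch of the template-insertion idea the snippet describes, applied to relation extraction: the input is wrapped in a template containing a masked slot, and a verbalizer maps the token predicted there back to a relation label. The template wording and label map below are hypothetical, not KnowPrompt's actual design.

```python
def build_prompt(sentence, head, tail, mask_token="[MASK]"):
    """Wrap the input in a cloze-style template; a PLM fills the mask."""
    return f"{sentence} The relation between {head} and {tail} is {mask_token}."

prompt = build_prompt("Marie Curie was born in Warsaw.", "Marie Curie", "Warsaw")
print(prompt)

# A verbalizer maps the predicted mask token to a relation label
# (hypothetical mapping for illustration).
verbalizer = {"birthplace": "place_of_birth", "employer": "employed_by"}
```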

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …
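
A minimal sketch of the masked-language-modeling style of self-supervision the snippet refers to: random tokens are hidden and the model is trained to recover them. The 15% masking rate and [MASK] token follow common BERT practice and are assumptions here.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", rate=0.15):
    """Hide a random subset of tokens; the hidden ones become targets."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < rate:
            masked.append(mask_token)
            targets.append(tok)   # the model must predict this token
        else:
            masked.append(tok)
            targets.append(None)  # position not scored
    return masked, targets

print(mask_tokens("knowledge graphs can enhance language models".split()))
```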

KEPLER: A unified model for knowledge embedding and pre-trained language representation

X Wang, T Gao, Z Zhu, Z Zhang, Z Liu, J Li… - Transactions of the …, 2021 - direct.mit.edu
Pre-trained language representation models (PLMs) struggle to capture factual knowledge
from text. In contrast, knowledge embedding (KE) methods can effectively represent the …
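
The unification in the title suggests a joint training objective; a minimal sketch that adds a TransE-style knowledge-embedding loss to an MLM loss, with random tensors standing in for PLM encodings of entity descriptions. The margin form of the KE loss is a simplified assumption, not KEPLER's exact negative-sampling loss.

```python
import torch

def ke_loss(h, r, t, h_neg, t_neg, gamma=4.0):
    """TransE-style margin loss: true triples should score
    (||h + r - t||) lower than corrupted ones."""
    pos = torch.norm(h + r - t, dim=-1)
    neg = torch.norm(h_neg + r - t_neg, dim=-1)
    return torch.relu(gamma + pos - neg).mean()

def joint_loss(mlm_loss, h, r, t, h_neg, t_neg):
    # The unified objective: language modeling plus knowledge embedding.
    return mlm_loss + ke_loss(h, r, t, h_neg, t_neg)

# Toy usage: random vectors stand in for encoded entity descriptions.
d = 8
h, r, t, h_neg, t_neg = (torch.randn(2, d) for _ in range(5))
print(joint_loss(torch.tensor(1.2), h, r, t, h_neg, t_neg))
```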

ERNIE: Enhanced language representation with informative entities

Z Zhang, X Han, Z Liu, X Jiang, M Sun, Q Liu - arXiv preprint arXiv …, 2019 - arxiv.org
Neural language representation models such as BERT, pre-trained on large-scale corpora,
can effectively capture rich semantic patterns from plain text and be fine-tuned to consistently …

Generalizing from a few examples: A survey on few-shot learning

Y Wang, Q Yao, JT Kwok, LM Ni - ACM computing surveys (csur), 2020 - dl.acm.org
Machine learning has been highly successful in data-intensive applications but is often
hampered when the data set is small. Recently, Few-shot Learning (FSL) has been proposed to …

Matching the blanks: Distributional similarity for relation learning

LB Soares, N FitzGerald, J Ling… - arXiv preprint arXiv …, 2019 - arxiv.org
General purpose relation extractors, which can model arbitrary relations, are a core
aspiration in information extraction. Efforts have been made to build general purpose …

Few-nerd: A few-shot named entity recognition dataset

N Ding, G Xu, Y Chen, X Wang, X Han, P Xie… - arXiv preprint arXiv …, 2021 - arxiv.org
Recently, a considerable literature has grown up around the theme of few-shot named entity
recognition (NER), but little published benchmark data has specifically focused on the practical …