IEPile: Unearthing large-scale schema-based information extraction corpus

H Gui, L Yuan, H Ye, N Zhang, M Sun, L Liang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) demonstrate remarkable potential across various domains;
however, they exhibit a significant performance gap in Information Extraction (IE). Note that …

IEPile: Unearthing Large Scale Schema-Conditioned Information Extraction Corpus

H Gui, L Yuan, H Ye, N Zhang, M Sun… - Proceedings of the …, 2024 - aclanthology.org
Large Language Models (LLMs) demonstrate remarkable potential across various
domains; however, they exhibit a significant performance gap in Information Extraction (IE) …

InstructIE: A Bilingual Instruction-based Information Extraction Dataset

H Gui, S Qiao, J Zhang, H Ye, M Sun, L Liang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models can perform well on general natural language tasks, but their
effectiveness is still not optimal for information extraction. Recent works indicate that the …

ADELIE: Aligning Large Language Models on Information Extraction

Y Qi, H Peng, X Wang, B Xu, L Hou, J Li - arXiv preprint arXiv:2405.05008, 2024 - arxiv.org
Large language models (LLMs) usually fall short on information extraction (IE) tasks and
struggle to follow the complex instructions of IE tasks. This primarily arises from LLMs not …

KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction

Z Li, Y Zeng, Y Zuo, W Ren, W Liu, M Su, Y Guo… - arXiv preprint arXiv …, 2024 - arxiv.org
In this paper, we propose KnowCoder, a Large Language Model (LLM) to conduct Universal
Information Extraction (UIE) via code generation. KnowCoder aims to develop a kind of …

Aligning instruction tasks unlocks large language models as zero-shot relation extractors

K Zhang, BJ Gutiérrez, Y Su - arXiv preprint arXiv:2305.11159, 2023 - arxiv.org
Recent work has shown that fine-tuning large language models (LLMs) on large-scale
instruction-following datasets substantially improves their performance on a wide range of …

Benchmarking large language models with augmented instructions for fine-grained information extraction

J Gao, H Zhao, Y Zhang, W Wang, C Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Information Extraction (IE) is an essential task in Natural Language Processing. Traditional
methods have relied on coarse-grained extraction with simple instructions. However, with …

IELM: An open information extraction benchmark for pre-trained language models

C Wang, X Liu, D Song - arXiv preprint arXiv:2210.14128, 2022 - arxiv.org
We introduce a new open information extraction (OIE) benchmark for pre-trained language
models (LM). Recent studies have demonstrated that pre-trained LMs, such as BERT and …

DiLuIE: constructing diverse demonstrations of in-context learning with large language model for unified information extraction

Q Guo, Y Guo, J Zhao - Neural Computing and Applications, 2024 - Springer
Large language models (LLMs) have demonstrated promising in-context learning
capabilities, especially with instructive prompts. However, recent studies have shown that …

MetaIE: Distilling a meta model from LLM for all kinds of information extraction tasks

L Peng, Z Wang, F Yao, Z Wang, J Shang - arXiv preprint arXiv …, 2024 - arxiv.org
Information extraction (IE) is a fundamental area in natural language processing where
prompting large language models (LLMs), even with in-context examples, cannot defeat …