Machine knowledge: Creation and curation of comprehensive knowledge bases

G Weikum, XL Dong, S Razniewski… - … and Trends® in …, 2021 - nowpublishers.com
Equipping machines with comprehensive knowledge of the world's entities and their
relationships has been a longstanding goal of AI. Over the last decade, large-scale …

Information extraction from electronic medical documents: state of the art and future research directions

MY Landolsi, L Hlaoua, L Ben Romdhane - Knowledge and Information …, 2023 - Springer
In the medical field, a doctor must build comprehensive knowledge by reading and writing
narrative documents, and is responsible for every decision made for patients …

LasUIE: Unifying information extraction with latent adaptive structure-aware generative language model

H Fei, S Wu, J Li, B Li, F Li, L Qin… - Advances in …, 2022 - proceedings.neurips.cc
Universally modeling all typical information extraction tasks (UIE) with one generative
language model (GLM) has shown great potential in recent studies, where various IE …

Graph neural networks for natural language processing: A survey

L Wu, Y Chen, K Shen, X Guo, H Gao… - … and Trends® in …, 2023 - nowpublishers.com
Deep learning has become the dominant approach in addressing various tasks in Natural
Language Processing (NLP). Although text inputs are typically represented as a sequence …

Clip-event: Connecting text and images with event structures

M Li, R Xu, S Wang, L Zhou, X Lin… - Proceedings of the …, 2022 - openaccess.thecvf.com
Vision-language (V+L) pretraining models have achieved great success in
supporting multimedia applications by understanding the alignments between images and …

Matching the blanks: Distributional similarity for relation learning

LB Soares, N FitzGerald, J Ling… - arXiv preprint arXiv …, 2019 - arxiv.org
General purpose relation extractors, which can model arbitrary relations, are a core
aspiration in information extraction. Efforts have been made to build general purpose …

FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization

E Durmus, H He, M Diab - arXiv preprint arXiv:2005.03754, 2020 - arxiv.org
Neural abstractive summarization models are prone to generate content inconsistent with
the source document, i.e., unfaithful. Existing automatic metrics do not capture such mistakes …

GSum: A general framework for guided neural abstractive summarization

ZY Dou, P Liu, H Hayashi, Z Jiang, G Neubig - arXiv preprint arXiv …, 2020 - arxiv.org
Neural abstractive summarization models are flexible and can produce coherent summaries,
but they are sometimes unfaithful and can be difficult to control. While previous studies …

PAQ: 65 million probably-asked questions and what you can do with them

P Lewis, Y Wu, L Liu, P Minervini, H Küttler… - Transactions of the …, 2021 - direct.mit.edu
Open-domain Question Answering models that directly leverage question-answer
(QA) pairs, such as closed-book QA (CBQA) models and QA-pair retrievers, show promise in …

Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training

O Agarwal, H Ge, S Shakeri, R Al-Rfou - arXiv preprint arXiv:2010.12688, 2020 - arxiv.org
Prior work on Data-To-Text Generation, the task of converting knowledge graph (KG) triples
into natural text, focused on domain-specific benchmark datasets. In this paper, however, we …