C Wang, X Liu, D Song - arXiv preprint arXiv:2010.11967, 2020 - arxiv.org
This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3), without human supervision. Popular KGs (e.g., Wikidata, NELL) …
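The idea of extracting KG facts directly from a pre-trained language model can be illustrated with a cloze-style probe. This is a toy sketch only, not the paper's attention-based matching method: `lm_fill` is a hypothetical stand-in for a real masked-LM call, and the prompt template and entities are invented examples.

```python
# Toy sketch: probing a language model for KG triples via cloze
# prompts. lm_fill is a hypothetical stand-in for a masked LM;
# a real system would rank vocabulary tokens by LM probability.

PROMPTS = {
    "capital_of": "{subject} is the capital of [MASK].",
}

def lm_fill(prompt):
    # Hypothetical stand-in for a masked-LM prediction.
    canned = {"Paris is the capital of [MASK].": "France"}
    return canned.get(prompt)

def probe_triples(subjects):
    """Turn LM cloze completions into (subject, relation, object) triples."""
    triples = []
    for subj in subjects:
        for relation, template in PROMPTS.items():
            obj = lm_fill(template.format(subject=subj))
            if obj:
                triples.append((subj, relation, obj))
    return triples

print(probe_triples(["Paris"]))
```

A real pipeline would replace the canned dictionary with an actual model call and filter the predictions for confidence before adding them to the graph.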
Large language models (LLMs) have shown remarkable generalization capability with exceptional performance in various language modeling tasks. However, they still exhibit …
Large language models (LLMs), such as ChatGPT and GPT-4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability …
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We …
D Zhang, Z Yuan, H Liu, H Xiong - … of the AAAI Conference on artificial …, 2022 - ojs.aaai.org
Graph walking based on reinforcement learning (RL) has shown great success in guiding an agent to automatically complete various reasoning tasks over an incomplete knowledge …
While large language models (LLMs) have made considerable advancements in understanding and generating unstructured text, their application in structured data remains …
Z Hou, X Jin, Z Li, L Bai - Findings of the Association for …, 2021 - aclanthology.org
Multi-hop reasoning is an effective and explainable approach to predicting missing facts in Knowledge Graphs (KGs). It usually adopts the Reinforcement Learning (RL) framework and …
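The multi-hop setting described above can be made concrete with a small sketch. The RL framework these papers adopt trains a policy to choose one outgoing edge per step; for clarity, this toy version replaces the learned policy with exhaustive breadth-first search over a made-up KG, so only the path-walking structure carries over.

```python
from collections import deque

# Toy knowledge graph as (head, relation, tail) triples.
# Entities and relations are invented examples.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
]

def build_adj(triples):
    """Index triples by head entity for fast edge lookup."""
    adj = {}
    for h, r, t in triples:
        adj.setdefault(h, []).append((r, t))
    return adj

def find_paths(adj, start, goal, max_hops=3):
    """Enumerate relation paths from start to goal, up to max_hops.

    An RL-based reasoner would sample one edge per step from a
    learned policy; here, breadth-first search stands in for it."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == goal and path:
            paths.append(path)
            continue
        if len(path) >= max_hops:
            continue
        for rel, nxt in adj.get(node, []):
            queue.append((nxt, path + [(rel, nxt)]))
    return paths

adj = build_adj(TRIPLES)
print(find_paths(adj, "Paris", "Europe"))
```

Each returned path, e.g. Paris → capital_of → France → located_in → Europe, is the kind of explainable evidence chain these methods use to justify a predicted fact.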
Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, it is largely unexplored whether they can …
Y Xia, M Lan, J Luo, X Chen, G Zhou - Information Processing & …, 2022 - Elsevier
In recent years, reasoning over knowledge graphs (KGs) has been widely adopted to empower retrieval systems, recommender systems, and question answering systems …