Natural language reasoning, a survey

F Yu, H Zhang, P Tiwari, B Wang - ACM Computing Surveys, 2023 - dl.acm.org
This survey paper proposes a clearer view of natural language reasoning in the field of
Natural Language Processing (NLP), both conceptually and practically. Conceptually, we …

Large language models and knowledge graphs: Opportunities and challenges

JZ Pan, S Razniewski, JC Kalo, S Singhania… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have taken Knowledge Representation--and the world--by
storm. This inflection point marks a shift from explicit knowledge representation to a renewed …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Symbolic knowledge distillation: from general language models to commonsense models

P West, C Bhagavatula, J Hessel, JD Hwang… - arXiv preprint arXiv …, 2021 - arxiv.org
The common practice for training commonsense models has gone from human, to corpus, to
machine: humans author commonsense knowledge graphs in order to train commonsense …

LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities

Y Zhu, X Wang, J Chen, S Qiao, Y Ou, Y Yao, S Deng… - World Wide Web, 2024 - Springer
This paper presents an exhaustive quantitative and qualitative evaluation of Large
Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We …

GreaseLM: Graph reasoning enhanced language models for question answering

X Zhang, A Bosselut, M Yasunaga, H Ren… - arXiv preprint arXiv …, 2022 - arxiv.org
Answering complex questions about textual narratives requires reasoning over both stated
context and the world knowledge that underlies it. However, pretrained language models …

SODA: Million-scale dialogue distillation with social commonsense contextualization

H Kim, J Hessel, L Jiang, P West, X Lu, Y Yu… - arXiv preprint arXiv …, 2022 - arxiv.org
We present SODA: the first publicly available, million-scale, high-quality social dialogue
dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming …

CEM: Commonsense-aware empathetic response generation

S Sabour, C Zheng, M Huang - Proceedings of the AAAI Conference on …, 2022 - ojs.aaai.org
A key trait of daily conversations between individuals is the ability to express empathy
towards others, and exploring ways to implement empathy is a crucial step towards human …

Mapping language models to grounded conceptual spaces

R Patel, E Pavlick - International conference on learning …, 2022 - openreview.net
A fundamental criticism of text-only language models (LMs) is their lack of grounding---that
is, the ability to tie a word for which they have learned a representation, to its actual use in …

Benchmarks for automated commonsense reasoning: A survey

E Davis - ACM Computing Surveys, 2023 - dl.acm.org
More than one hundred benchmarks have been developed to test the commonsense
knowledge and commonsense reasoning abilities of artificial intelligence (AI) systems …