Natural language reasoning, a survey

F Yu, H Zhang, P Tiwari, B Wang - ACM Computing Surveys, 2023 - dl.acm.org
This survey paper proposes a clearer view of natural language reasoning in the field of
Natural Language Processing (NLP), both conceptually and practically. Conceptually, we …

State-of-the-art generalisation research in NLP: a taxonomy and review

D Hupkes, M Giulianelli, V Dankers, M Artetxe… - arXiv preprint arXiv …, 2022 - arxiv.org
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …

ProoFVer: Natural logic theorem proving for fact verification

A Krishna, S Riedel, A Vlachos - Transactions of the Association for …, 2022 - direct.mit.edu
Fact verification systems typically rely on neural network classifiers for veracity prediction,
which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to …
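
The snippet is cut off before the mechanism, but the title points to natural-logic theorem proving: the generated proof is a sequence of claim/evidence mutations, each tagged with a natural-logic relation, and the veracity label follows from composing those relations. The sketch below illustrates only that composition step; the relation symbols, transition table, and state names are my own simplification, not ProoFVer's exact formulation, and the seq2seq model that would generate the proof is omitted.

```python
# Hedged sketch: composing natural-logic relations into a veracity verdict.
# Relation set, transition table, and labels are illustrative assumptions.

EQUIV = "="          # claim span and evidence span are equivalent
FWD_ENTAIL = "<"     # evidence span entails the claim span
NEGATION = "!"       # spans are negations of each other
ALTERNATION = "|"    # spans are mutually exclusive alternatives
INDEPENDENCE = "#"   # spans are unrelated

# Deterministic transitions over a running verdict state.
TRANSITIONS = {
    ("SUPPORTS", EQUIV): "SUPPORTS",
    ("SUPPORTS", FWD_ENTAIL): "SUPPORTS",
    ("SUPPORTS", NEGATION): "REFUTES",
    ("SUPPORTS", ALTERNATION): "REFUTES",
    ("SUPPORTS", INDEPENDENCE): "NOT ENOUGH INFO",
    ("REFUTES", EQUIV): "REFUTES",
    ("REFUTES", FWD_ENTAIL): "REFUTES",
    ("REFUTES", NEGATION): "SUPPORTS",
    ("REFUTES", ALTERNATION): "NOT ENOUGH INFO",
    ("REFUTES", INDEPENDENCE): "NOT ENOUGH INFO",
}

def verdict(proof_relations):
    """Fold a generated proof (a sequence of relations) into a final label."""
    state = "SUPPORTS"
    for rel in proof_relations:
        if state == "NOT ENOUGH INFO":   # absorbing state
            break
        state = TRANSITIONS[(state, rel)]
    return state

# Example: one mutation preserves meaning, the next negates it -> REFUTES.
print(verdict([EQUIV, NEGATION]))
```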

Natural language deduction through search over statement compositions

K Bostrom, Z Sprague, S Chaudhuri… - arXiv preprint arXiv …, 2022 - arxiv.org
In settings from fact-checking to question answering, we frequently want to know whether a
collection of evidence (premises) entails a hypothesis. Existing methods primarily focus on …
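
As described, the approach searches over compositions of premise statements and asks whether any composed statement supports the hypothesis. A minimal sketch of such a search loop follows; `compose` and `entails` are trivial string placeholders standing in for the learned generation and entailment models, which is an assumption for illustration rather than the authors' implementation.

```python
# Hedged sketch of deduction as search over statement compositions.
# `compose` and `entails` are toy placeholders for learned models.
from itertools import combinations

def compose(a: str, b: str) -> str:
    # Stand-in for a generation model that fuses two statements into one.
    return f"{a} and {b}"

def entails(statement: str, hypothesis: str) -> bool:
    # Stand-in for an entailment model; here: all hypothesis words appear.
    return set(hypothesis.lower().split()) <= set(statement.lower().split())

def prove(premises, hypothesis, max_rounds=3):
    """Breadth-first search over pairwise compositions of known statements."""
    known = list(premises)
    for _ in range(max_rounds):
        new = []
        for a, b in combinations(known, 2):
            c = compose(a, b)
            if c in known or c in new:
                continue
            if entails(c, hypothesis):
                return c                    # found a supporting composition
            new.append(c)
        if not new:
            break                           # no progress: give up
        known.extend(new)
    return None

print(prove(["Socrates is a man", "all men are mortal"], "Socrates is mortal"))
```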

FaiRR: Faithful and robust deductive reasoning over natural language

S Sanyal, H Singh, X Ren - arXiv preprint arXiv:2203.10261, 2022 - arxiv.org
Transformers have been shown to be able to perform deductive reasoning on a logical
rulebase containing rules and statements written in natural language. Recent works show …
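
The snippet describes deductive reasoning over a rulebase of natural-language rules and statements. Below is a hedged sketch of the general forward-chaining loop such systems perform; the (premises, conclusion) rule format and exact-string matching are my own toy stand-ins for the learned rule-selection, fact-selection, and composition steps a system like FaiRR delegates to neural modules.

```python
# Hedged sketch: stepwise deductive reasoning over a natural language
# rulebase. Rules are (premises, conclusion) tuples; matching is exact string
# equality -- toy stand-ins for learned selector and composer modules.

def forward_chain(facts, rules, goal, max_steps=10):
    """Derive new statements until the goal appears or nothing new follows."""
    derived = set(facts)
    proof = []                                     # (premises, conclusion) trace
    for _ in range(max_steps):
        fired = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                proof.append((premises, conclusion))
                fired = True
                if conclusion == goal:
                    return True, proof
        if not fired:                              # fixpoint reached
            break
    return goal in derived, proof

facts = {"the cat is nice", "the cat is big"}
rules = [(("the cat is nice", "the cat is big"), "the cat is furry"),
         (("the cat is furry",), "the cat is happy")]
print(forward_chain(facts, rules, "the cat is happy"))
```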

Summarization programs: Interpretable abstractive summarization with neural modular trees

S Saha, S Zhang, P Hase, M Bansal - arXiv preprint arXiv:2209.10492, 2022 - arxiv.org
Current abstractive summarization models either suffer from a lack of clear interpretability or
provide incomplete rationales by only highlighting parts of the source document. To this end …
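
The snippet stops at the problem statement; the title points to "neural modular trees", i.e. an explicit program whose leaves are source sentences and whose internal nodes are operations that produce summary sentences, so the whole tree serves as the rationale. The sketch below shows only that data structure; the module names and their placeholder string behaviour are assumptions, not the paper's actual modules.

```python
# Hedged sketch of a "summarization program": a tree whose leaves are source
# sentences and whose internal nodes name an operation producing a new
# sentence. Operations are trivial string placeholders for neural modules.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str = "leaf"                  # e.g. "leaf", "compress", "fuse" (assumed names)
    text: str = ""                    # populated for leaves / after execution
    children: list = field(default_factory=list)

    def execute(self) -> str:
        if self.op == "leaf":
            return self.text
        parts = [c.execute() for c in self.children]
        if self.op == "compress":     # placeholder: keep the first clause
            self.text = parts[0].split(",")[0]
        elif self.op == "fuse":       # placeholder: join the inputs
            self.text = "; ".join(parts)
        else:
            raise ValueError(f"unknown module {self.op!r}")
        return self.text

# A tiny "summary program" over a toy document.
program = Node("fuse", children=[
    Node("compress", children=[Node(text="The model is interpretable, the authors claim")]),
    Node(text="It builds an explicit tree of operations."),
])
print(program.execute())   # the generated summary
print(program)             # the tree itself is the rationale
```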

METGEN: A module-based entailment tree generation framework for answer explanation

R Hong, H Zhang, X Yu, C Zhang - arXiv preprint arXiv:2205.02593, 2022 - arxiv.org
Knowing the reasoning chains from knowledge to the predicted answers can help construct
an explainable question answering (QA) system. Advances in QA explanation propose to …
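
The snippet mentions reasoning chains from knowledge to answers; the title specifies module-based entailment tree generation. The sketch below illustrates only the target structure: a proof in which each step combines premises (leaf facts or earlier intermediates) into a new intermediate conclusion, with the final intermediate matching the hypothesis. The step format and linear proof string are assumptions modelled loosely on entailment-tree datasets, not METGEN's interface.

```python
# Hedged sketch of the entailment-tree structure a module-based generator
# works toward. Each step: (premise_ids, module_name, conclusion_text).

def build_tree(facts, steps, hypothesis):
    """Assemble a proof from single-step entailments and check completeness."""
    statements = dict(facts)               # id -> text, grows with intermediates
    proof_lines = []
    for i, (premises, module, conclusion) in enumerate(steps, start=1):
        missing = [p for p in premises if p not in statements]
        if missing:
            raise ValueError(f"step {i} uses unknown premises: {missing}")
        int_id = f"int{i}"
        statements[int_id] = conclusion
        proof_lines.append(f"{' & '.join(premises)} -[{module}]-> {int_id}: {conclusion}")
    complete = statements[f"int{len(steps)}"] == hypothesis
    return "\n".join(proof_lines), complete

facts = {"sent1": "iron is a metal", "sent2": "metals conduct electricity"}
steps = [(["sent1", "sent2"], "substitution", "iron conducts electricity")]
proof, complete = build_tree(facts, steps, "iron conducts electricity")
print(proof)
print("reaches hypothesis:", complete)
```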

RECKONING: Reasoning through dynamic knowledge encoding

Z Chen, G Weiss, E Mitchell… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent studies on transformer-based language models show that they can answer
questions by reasoning over knowledge provided as part of the context (i.e., in-context …
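
The snippet contrasts with in-context reasoning; the title's "dynamic knowledge encoding" suggests folding the knowledge into the model's weights before answering rather than re-reading it from the prompt. The toy PyTorch sketch below illustrates only that inner-loop idea, namely a few gradient steps on the knowledge followed by answering with the adapted copy; the model, loss, and data are stand-ins, not the paper's architecture or training recipe.

```python
# Hedged sketch of "dynamic knowledge encoding": adapt a copy of the model on
# the knowledge statements, then answer with the adapted weights.
import copy
import torch
from torch import nn

class ToyLM(nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):                 # ids: (batch, seq)
        h = self.emb(ids).mean(dim=1)       # crude pooled representation
        return self.head(h)                 # answer logits

def encode_knowledge(model, knowledge_ids, targets, steps=5, lr=1e-2):
    """Return a copy of the model whose weights have absorbed the knowledge."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):                  # inner-loop gradient updates
        opt.zero_grad()
        loss = loss_fn(adapted(knowledge_ids), targets)
        loss.backward()
        opt.step()
    return adapted

# Usage: encode a batch of (fake) knowledge statements, then answer a question
# with the adapted weights instead of re-reading the knowledge in context.
base = ToyLM()
knowledge = torch.randint(0, 100, (4, 8))   # 4 toy knowledge statements
labels = torch.randint(0, 100, (4,))        # toy prediction targets
question = torch.randint(0, 100, (1, 8))
answer_logits = encode_knowledge(base, knowledge, labels)(question)
print(answer_logits.shape)
```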

AbductionRules: Training transformers to explain unexpected inputs

N Young, Q Bao, J Bensemann, M Witbrock - arXiv preprint arXiv …, 2022 - arxiv.org
Transformers have recently been shown to be capable of reliably performing logical
reasoning over facts and rules expressed in natural language, but abductive reasoning …
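
The snippet contrasts deductive reasoning over facts and rules with abductive reasoning, i.e. proposing a missing fact that would explain an unexpected observation. The toy sketch below shows that inference pattern symbolically; AbductionRules poses the task over natural language text, so the rule format and exact-match logic here are purely illustrative assumptions.

```python
# Hedged sketch of abduction over a toy rulebase: given rules and an
# observation the current facts do not explain, propose missing premises.

def abduce(facts, rules, observation):
    """Return candidate missing facts that would let some rule derive the
    observation, given the facts we already have."""
    candidates = []
    for premises, conclusion in rules:
        if conclusion != observation:
            continue
        missing = [p for p in premises if p not in facts]
        if len(missing) == 1:              # exactly one gap worth proposing
            candidates.append(missing[0])
    return candidates

facts = {"the grass is uncovered"}
rules = [(("it rained", "the grass is uncovered"), "the grass is wet"),
         (("the sprinkler ran",), "the grass is wet")]
# Each proposed fact would, together with the rules, explain the observation.
print(abduce(facts, rules, "the grass is wet"))
```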

Logical reasoning over natural language as knowledge representation: A survey

Z Yang, X Du, R Mao, J Ni, E Cambria - arXiv preprint arXiv:2303.12023, 2023 - arxiv.org
Logical reasoning is central to human cognition and intelligence. It includes deductive,
inductive, and abductive reasoning. Past research on logical reasoning within AI uses formal …