The ability to generalise well is one of the primary desiderata of natural language processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …
Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to …
In settings from fact-checking to question answering, we frequently want to know whether a collection of evidence (premises) entails a hypothesis. Existing methods primarily focus on …
Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in natural language. Recent works show …
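The deductive task described above can be illustrated symbolically: given facts and if-then rules (which these models consume as natural-language sentences), repeatedly apply the rules until a queried statement is or is not derived. The sketch below is a minimal forward-chaining toy; the facts, rules, and names in it are hypothetical illustrations, not from any benchmark.

```python
# Toy forward-chaining sketch of the deductive setting: derive the closure
# of a rulebase of (premises -> conclusion) rules over a set of facts.
# All facts and rules here are invented for illustration.
def forward_chain(facts, rules):
    """Apply rules until no new statements can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"Erin is kind", "Erin is big"}
rules = [
    (("Erin is kind",), "Erin is nice"),
    (("Erin is nice", "Erin is big"), "Erin is green"),
]
closure = forward_chain(facts, rules)
print("Erin is green" in closure)  # True: derived via two rule applications
```

A transformer trained on such rulebases answers the same membership query ("Erin is green"?) directly from the natural-language statements, without an explicit chaining procedure.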
Current abstractive summarization models either suffer from a lack of clear interpretability or provide incomplete rationales by only highlighting parts of the source document. To this end …
R Hong, H Zhang, X Yu, C Zhang - arXiv preprint arXiv:2205.02593, 2022 - arxiv.org
Knowing the reasoning chains from knowledge to the predicted answers can help construct an explainable question answering (QA) system. Advances on QA explanation propose to …
Recent studies on transformer-based language models show that they can answer questions by reasoning over knowledge provided as part of the context (i.e., in-context …
Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning …
Logical reasoning is central to human cognition and intelligence. It includes deductive, inductive, and abductive reasoning. Past research of logical reasoning within AI uses formal …