A review on language models as knowledge bases

B AlKhamissi, M Li, A Celikyilmaz, M Diab… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, there has been a surge of interest in the NLP community on the use of pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …

Natural language reasoning, a survey

F Yu, H Zhang, P Tiwari, B Wang - ACM Computing Surveys, 2023 - dl.acm.org
This survey paper proposes a clearer view of natural language reasoning in the field of
Natural Language Processing (NLP), both conceptually and practically. Conceptually, we …

Selection-inference: Exploiting large language models for interpretable logical reasoning

A Creswell, M Shanahan, I Higgins - arXiv preprint arXiv:2205.09712, 2022 - arxiv.org
Large language models (LLMs) have been shown to be capable of impressive few-shot
generalisation to new tasks. However, they still tend to perform poorly on multi-step logical …

A comprehensive survey on applications of transformers for deep learning tasks

S Islam, H Elmekki, A Elsebai, J Bentahar… - Expert Systems with …, 2023 - Elsevier
Transformers are Deep Neural Networks (DNNs) that utilize a self-attention
mechanism to capture contextual relationships within sequential data. Unlike traditional …

ProofWriter: Generating implications, proofs, and abductive statements over natural language

O Tafjord, BD Mishra, P Clark - arXiv preprint arXiv:2012.13048, 2020 - arxiv.org
Transformers have been shown to emulate logical deduction over natural language theories
(logical rules expressed in natural language), reliably assigning true/false labels to …

Explaining answers with entailment trees

B Dalvi, P Jansen, O Tafjord, Z Xie, H Smith… - arXiv preprint arXiv …, 2021 - arxiv.org
Our goal, in the context of open-domain textual question-answering (QA), is to explain
answers by showing the line of reasoning from what is known to the answer, rather than …

Unveiling transformers with LEGO: a synthetic reasoning task

Y Zhang, A Backurs, S Bubeck, R Eldan… - arXiv preprint arXiv …, 2022 - arxiv.org
We propose a synthetic reasoning task, LEGO (Learning Equality and Group Operations),
that encapsulates the problem of following a chain of reasoning, and we study how the …

NaturalProver: Grounded mathematical proof generation with language models

S Welleck, J Liu, X Lu, H Hajishirzi… - Advances in Neural …, 2022 - proceedings.neurips.cc
Theorem proving in natural mathematical language, the mixture of symbolic and natural
language used by humans, plays a central role in mathematical advances and education …

Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies

L Pan, M Saxon, W Xu, D Nathani, X Wang… - Transactions of the …, 2024 - direct.mit.edu
While large language models (LLMs) have shown remarkable effectiveness in various NLP
tasks, they are still prone to issues such as hallucination, unfaithful reasoning, and toxicity. A …

Reasoning with transformer-based models: Deep learning, but shallow reasoning

C Helwe, C Clavel, F Suchanek - International Conference on …, 2021 - imt.hal.science
Recent years have seen impressive performance of transformer-based models on different
natural language processing tasks. However, it is not clear to what degree the transformers …