A closer look at the self-verification abilities of large language models in logical reasoning

R Hong, H Zhang, X Pang, D Yu, C Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
Logical reasoning has been an ongoing pursuit in the field of AI. Despite significant
advancements made by large language models (LLMs), they still struggle with complex …

CLongEval: A Chinese benchmark for evaluating long-context large language models

Z Qiu, J Li, S Huang, W Zhong, I King - arXiv preprint arXiv:2403.03514, 2024 - arxiv.org
Developing Large Language Models (LLMs) with robust long-context capabilities has been
the recent research focus, resulting in the emergence of long-context LLMs proficient in …

IDOL: indicator-oriented logic pre-training for logical reasoning

Z Xu, Z Yang, Y Cui, S Wang - arXiv preprint arXiv:2306.15273, 2023 - arxiv.org
In the field of machine reading comprehension (MRC), existing systems have surpassed the
average performance of human beings in many tasks like SQuAD. However, there is still a …

Harnessing Knowledge and Reasoning for Human-Like Natural Language Generation: A Brief Review

J Chen, Y Xiao - arXiv preprint arXiv:2212.03747, 2022 - arxiv.org
The rapid development and application of natural language generation (NLG) techniques
has revolutionized the field of automatic text production. However, these techniques are still …

DetermLR: Augmenting LLM-based logical reasoning from indeterminacy to determinacy

H Sun, W Xu, W Liu, J Luan, B Wang… - Proceedings of the …, 2024 - aclanthology.org
Recent advances in large language models (LLMs) have revolutionized the landscape of
reasoning tasks. To enhance the capabilities of LLMs to emulate human reasoning, prior …

Unifying structure reasoning and language model pre-training for complex reasoning

S Wang, Z Wei, J Xu, T Li, Z Fan - arXiv preprint arXiv:2301.08913, 2023 - arxiv.org
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …

LogiTorch: A PyTorch-based library for logical reasoning on natural language

C Helwe, C Clavel, F Suchanek - Proceedings of the 2022 …, 2022 - aclanthology.org
Logical reasoning on natural language is one of the most challenging tasks for deep
learning models. There has been an increasing interest in developing new benchmarks to …

DaGATN: A Type of Machine Reading Comprehension Based on Discourse-Apperceptive Graph Attention Networks

M Wu, T Sun, Z Wang, J Duan - Applied Sciences, 2023 - mdpi.com
In recent years, with the advancement of natural language processing techniques and the
release of models like ChatGPT, how language models understand questions has become a …

Disentangling reasoning capabilities from language models with compositional reasoning transformers

W Zhong, T Ma, J Wang, J Yin, T Zhao, CY Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
This paper presents ReasonFormer, a unified reasoning framework for mirroring the
modular and compositional reasoning process of humans in complex decision-making …

Unifying Structure Reasoning and Language Pre-Training for Complex Reasoning Tasks

S Wang, Z Wei, J Xu, T Li, Z Fan - IEEE/ACM Transactions on …, 2023 - ieeexplore.ieee.org
Recent pre-trained language models (PLMs) equipped with foundation reasoning skills
have shown remarkable performance on downstream complex tasks. However, the …