BY Lin, S Lee, R Khanna, X Ren - arXiv preprint arXiv:2005.00683, 2020 - arxiv.org
Recent works show that pre-trained language models (PTLMs), such as BERT, possess certain commonsense and factual knowledge. They suggest that it is promising to use …
P Bhargava, V Ng - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
While commonsense knowledge acquisition and reasoning have traditionally been core research topics in the knowledge representation and reasoning community, recent years …
We present a new probing dataset named PROST: Physical Reasoning about Objects Through Space and Time. This dataset contains 18,736 multiple-choice questions made …
K Kondo, S Sugawara, A Aizawa - arXiv preprint arXiv:2306.02258, 2023 - arxiv.org
In this study, we create a CConS (Counter-commonsense Contextual Size comparison) dataset to investigate how physical commonsense affects the contextualized size …
D Panas, S Seth, V Belle - … Conference on Neural-Symbolic Learning and …, 2024 - Springer
Two major areas of interest in the era of Large Language Models concern what LLMs know, and whether and how they may be able to reason, or rather, approximately reason …
Current Artificial Intelligence (AI) methods, most of which are based on deep learning, have facilitated progress in several fields, including computer vision and natural language understanding …
To understand language, machines need to be cognizant of the implied commonsense knowledge underlying the text. Observations obtained from real-world scenarios contain …