Pretraining a language model (LM) on text has been shown to help various downstream NLP tasks. Recent works show that a knowledge graph (KG) can complement text data …
Role-play has long been used as an active learning technique in educational and training contexts. In particular, Edu-larp (a structured, live action roleplay experience that teaches …
Large language models (LLMs) have shown remarkable generalization, with exceptional performance across various language modeling tasks. However, they still exhibit …
Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context to commonsense question answering (QA), it …
Y Huang, Y Li, Y Xu, L Zhang, R Gan… - Proceedings of the …, 2023 - aclanthology.org
Recent advances in pre-trained language models (PLMs) have facilitated the development of commonsense reasoning tasks. However, existing methods rely on multi-hop …
P Bhargava, V Ng - Proceedings of the AAAI Conference on Artificial …, 2022 - ojs.aaai.org
While commonsense knowledge acquisition and reasoning have traditionally been a core research topic in the knowledge representation and reasoning community, recent years …
Most of today's AI systems focus on using self-attention mechanisms and transformer architectures on large amounts of diverse data to achieve impressive performance gains. In …
Knowledge in NLP has been a rising trend, especially since the advent of large-scale pre-trained models. Knowledge is critical to equip statistics-based models with common sense …
Recently, Retrieval-Augmented Generation (RAG) has achieved remarkable success in addressing the challenges of Large Language Models (LLMs) without necessitating …