The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option. This has recently given rise to a range of …
This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and …
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP, aimed at addressing limitations in existing frameworks while aligning with …
ChatGPT has gained huge popularity since its introduction. Its positive aspects have been reported across many media platforms, and some analyses even showed that ChatGPT …
We propose a novel approach to conformal prediction for generative language models (LMs). Standard conformal prediction produces prediction sets--in place of single predictions …
X Liang, S Song, Z Zheng, H Wang, Q Yu, X Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) often exhibit deficient reasoning or generate hallucinations. To address these, studies prefixed with "Self-", such as Self-Consistency, Self-Improve, and …
Multilingual large-scale Pretrained Language Models (PLMs) have been shown to store considerable amounts of factual knowledge, but large variations are observed across …
K Ellis - Advances in Neural Information Processing …, 2023 - proceedings.neurips.cc
A core tension in models of concept learning is that the model must carefully balance the tractability of inference against the expressivity of the hypothesis class. Humans, however …
As of September 2023, ChatGPT correctly answers "what is 7+8" with 15, but when asked "7+8=15, True or False" it responds with "False". This inconsistency between generating …