A Principled Framework for Knowledge-enhanced Large Language Model

S Wang, Z Liu, Z Wang, J Guo - arXiv preprint arXiv:2311.11135, 2023 - arxiv.org
Large Language Models (LLMs) are versatile, yet they often falter in tasks requiring deep
and reliable reasoning due to issues like hallucinations, limiting their applicability in critical …

Disentangling Memory and Reasoning Ability in Large Language Models

M Jin, W Luo, S Cheng, X Wang, W Hua… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated strong performance in handling
complex tasks requiring both extensive knowledge and reasoning abilities. However, the …

Knowledge Fusion of Large Language Models

F Wan, X Huang, D Cai, X Quan, W Bi, S Shi - arXiv preprint arXiv …, 2024 - arxiv.org
While training large language models (LLMs) from scratch can generate models with distinct
functionalities and strengths, it comes at significant costs and may result in redundant …

Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism

L Cao - arXiv preprint arXiv:2311.01041, 2023 - arxiv.org
Large language models (LLMs) have demonstrated impressive language understanding
and generation capabilities, enabling them to answer a wide range of questions across …

From Static to Dynamic: Knowledge Metabolism for Large Language Models

M Du, AT Luu, B Ji, SK Ng - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
The immense parameter space of Large Language Models (LLMs) endows them with
superior knowledge retention capabilities, allowing them to excel in a variety of natural …

Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance

Y Fu, L Ou, M Chen, Y Wan, H Peng, T Khot - arXiv preprint arXiv …, 2023 - arxiv.org
As large language models (LLMs) are continuously being developed, their evaluation
becomes increasingly important yet challenging. This work proposes Chain-of-Thought Hub …

Attention-Driven Reasoning: Unlocking the Potential of Large Language Models

B Liao, DV Vargas - arXiv preprint arXiv:2403.14932, 2024 - arxiv.org
Large Language Models (LLMs) have shown remarkable capabilities, but their reasoning
abilities and underlying mechanisms remain poorly understood. We present a novel …

ALCUNA: Large Language Models Meet New Knowledge

X Yin, B Huang, X Wan - arXiv preprint arXiv:2310.14820, 2023 - arxiv.org
With the rapid development of NLP, large-scale language models (LLMs) now excel in various
tasks across multiple domains. However, existing benchmarks may not adequately …

How Do Large Language Models Capture the Ever-Changing World Knowledge? A Review of Recent Advances

Z Zhang, M Fang, L Chen, MR Namazi-Rad… - arXiv preprint arXiv …, 2023 - arxiv.org
Although large language models (LLMs) are impressive in solving various tasks, they can
quickly be outdated after deployment. Maintaining their up-to-date status is a pressing …

Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension

Y Fang, R Tang - arXiv preprint arXiv:2501.01332, 2025 - arxiv.org
Understanding how large language models (LLMs) acquire, retain, and apply knowledge
remains an open challenge. This paper introduces a novel framework, K-(CSA)^2, which …