In this work, we present HiReview, a novel framework for hierarchical taxonomy-driven automatic literature review generation. With the exponential growth of academic documents …

B Li, H Liang, Y Li, F Fu, H Yin, C He… - arXiv preprint arXiv …, 2024 - arxiv.org
During the pretraining phase, large language models (LLMs) acquire vast amounts of knowledge from extensive text corpora. Nevertheless, in later stages such as fine-tuning and …

Large language models (LLMs) have shown significant achievements in solving a wide range of tasks. Recently, LLMs' capability to store, retrieve, and infer with symbolic …

Continual learning (CL) aims to empower machine learning models to learn continually from new data while building upon previously acquired knowledge without forgetting. As …

D Li, Z Sun, X Hu, B Hu, M Zhang - arXiv preprint arXiv:2412.07393, 2024 - arxiv.org
Large Language Models (LLMs) need to adapt to the continuous changes in data, tasks, and user preferences. Due to their massive size and the high costs associated with training …

J Zheng, C Shi, X Cai, Q Li, D Zhang, C Li, D Yu… - arXiv preprint arXiv …, 2025 - arxiv.org
Lifelong learning, also known as continual or incremental learning, is a crucial component for advancing Artificial General Intelligence (AGI) by enabling systems to continuously adapt …

Q Wang, J Wu, Z Tang, B Luo, N Chen, W Chen… - arXiv preprint arXiv …, 2025 - arxiv.org
We argue that advancing LLM-based human simulation requires addressing both LLMs' inherent limitations and simulation framework design challenges. Recent studies have …

H Sun, Y Gao - arXiv preprint arXiv:2411.11932, 2024 - arxiv.org
Although substantial efforts have been made to mitigate catastrophic forgetting in continual learning, the intrinsic mechanisms are not well understood. In this paper, we discover that …

X Hu, J Zhang, X Chen, J Wang, X Cai, X Zhan - openreview.net
Large language models (LLMs) have seen staggering progress in recent years. Contemporary LLMs rely on an immense amount of data for training; however, as LLMs …