Recent advances of foundation language models-based continual learning: A survey

Y Yang, J Zhou, X Ding, T Huai, S Liu, Q Chen… - ACM Computing …, 2024 - dl.acm.org
Recently, foundation language models (LMs) have marked significant achievements in the
domains of natural language processing (NLP) and computer vision (CV). Unlike traditional …

HiReview: Hierarchical taxonomy-driven automatic literature review generation

Y Hu, Z Li, Z Zhang, C Ling, R Kanjiani, B Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
In this work, we present HiReview, a novel framework for hierarchical taxonomy-driven
automatic literature review generation. With the exponential growth of academic documents …

Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models

B Li, H Liang, Y Li, F Fu, H Yin, C He… - arXiv preprint arXiv …, 2024 - arxiv.org
During the pretraining phase, large language models (LLMs) acquire vast amounts of
knowledge from extensive text corpora. Nevertheless, in later stages such as fine-tuning and …

Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study

K Wang, G Qi, J Li, S Zhai - arXiv preprint arXiv:2406.17532, 2024 - arxiv.org
Large language models (LLMs) have shown significant achievements in solving a wide
range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic …

Recent Advances of Multimodal Continual Learning: A Comprehensive Survey

D Yu, X Zhang, Y Chen, A Liu, Y Zhang, PS Yu… - arXiv preprint arXiv …, 2024 - arxiv.org
Continual learning (CL) aims to empower machine learning models to learn continually from
new data, while building upon previously acquired knowledge without forgetting. As …

CMT: A Memory Compression Method for Continual Knowledge Learning of Large Language Models

D Li, Z Sun, X Hu, B Hu, M Zhang - arXiv preprint arXiv:2412.07393, 2024 - arxiv.org
Large Language Models (LLMs) need to adapt to the continuous changes in data, tasks, and
user preferences. Due to their massive size and the high costs associated with training …

Lifelong Learning of Large Language Model based Agents: A Roadmap

J Zheng, C Shi, X Cai, Q Li, D Zhang, C Li, D Yu… - arXiv preprint arXiv …, 2025 - arxiv.org
Lifelong learning, also known as continual or incremental learning, is a crucial component
for advancing Artificial General Intelligence (AGI) by enabling systems to continuously adapt …

What Limits LLM-based Human Simulation: LLMs or Our Design?

Q Wang, J Wu, Z Tang, B Luo, N Chen, W Chen… - arXiv preprint arXiv …, 2025 - arxiv.org
We argue that advancing LLM-based human simulation requires addressing both LLM's
inherent limitations and simulation framework design challenges. Recent studies have …

Reviving Dormant Memories: Investigating Catastrophic Forgetting in Language Models through Rationale-Guidance Difficulty

H Sun, Y Gao - arXiv preprint arXiv:2411.11932, 2024 - arxiv.org
Although substantial efforts have been made to mitigate catastrophic forgetting in continual
learning, the intrinsic mechanisms are not well understood. In this paper, we discover that …

Co-evolved Self-Critique: Enhancing Large Language Models with Self-Generated Data

X Hu, J Zhang, X Chen, J Wang, X Cai, X Zhan - openreview.net
Large language models (LLMs) have seen staggering progress in recent years.
Contemporary LLMs rely on an immense amount of data for training, however, as LLMs …