Revisiting class-incremental learning with pre-trained models: Generalizability and adaptivity are all you need

DW Zhou, ZW Cai, HJ Ye, DC Zhan, Z Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting
old ones. Traditional CIL models are trained from scratch to continually acquire knowledge …
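
Since this snippet defines the class-incremental setting, a minimal toy sketch of that protocol follows (my own illustration with synthetic features, not the authors' code): classes arrive in disjoint groups, and after each stage the model is evaluated on all classes seen so far.

```python
# Toy class-incremental protocol (illustrative only, not the paper's code).
# Classes arrive in disjoint groups; after each stage we test on ALL classes seen so far.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_classes, feat_dim, per_class = 10, 32, 50
# Synthetic features standing in for a pre-trained backbone's embeddings.
means = rng.normal(size=(num_classes, feat_dim))
X = np.vstack([m + 0.3 * rng.normal(size=(per_class, feat_dim)) for m in means])
y = np.repeat(np.arange(num_classes), per_class)

tasks = np.array_split(np.arange(num_classes), 5)  # 5 stages, 2 new classes each
seen = []
for t, new_classes in enumerate(tasks):
    seen.extend(new_classes)
    # Naive baseline: retrain only on the CURRENT task's data (no memory of old classes),
    # which is exactly the regime where catastrophic forgetting shows up.
    cur = np.isin(y, new_classes)
    clf = LogisticRegression(max_iter=1000).fit(X[cur], y[cur])
    test = np.isin(y, seen)
    acc = (clf.predict(X[test]) == y[test]).mean()
    print(f"stage {t}: {len(seen)} classes seen, accuracy on all seen = {acc:.2f}")
```

The printed accuracy drops as more classes are seen, which is the forgetting behaviour that CIL methods aim to avoid.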

Class-incremental learning: A survey

DW Zhou, QW Wang, ZH Qi, HJ Ye… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results
in many vision tasks in the closed world. However, novel classes emerge from time to time in …

RanPAC: Random projections and pre-trained models for continual learning

MD McDonnell, D Gong, A Parvaneh… - Advances in …, 2024 - proceedings.neurips.cc
Continual learning (CL) aims to incrementally learn different tasks (such as classification) in
a non-stationary data stream without forgetting old ones. Most CL works focus on tackling …
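
The snippet above only states the setting; as a hedged sketch of what a random-projection approach on frozen pre-trained features can look like (my own toy reconstruction, with assumed variable names and details, not the released RanPAC code), consider the following one-pass, replay-free update:

```python
# Rough sketch of random projections on top of frozen pre-trained features for
# continual learning (illustrative; names and details are assumptions, not the
# paper's reference implementation).
import numpy as np

rng = np.random.default_rng(0)
feat_dim, proj_dim, num_classes, lam = 64, 512, 10, 1.0

W_rand = rng.normal(size=(feat_dim, proj_dim))   # frozen random projection
G = np.zeros((proj_dim, proj_dim))               # running Gram matrix of projected features
C = np.zeros((proj_dim, num_classes))            # running per-class feature sums

def absorb_task(features, labels):
    """Absorb one task's data in a single pass; no old data is revisited."""
    global G, C
    H = np.maximum(features @ W_rand, 0.0)       # expand dimensionality + nonlinearity
    G += H.T @ H
    for c in np.unique(labels):
        C[:, c] += H[labels == c].sum(axis=0)

def predict(features):
    W_out = np.linalg.solve(G + lam * np.eye(proj_dim), C)   # ridge-style readout
    H = np.maximum(features @ W_rand, 0.0)
    return (H @ W_out).argmax(axis=1)

# Toy stream: synthetic "backbone" features for two disjoint class groups.
means = rng.normal(size=(num_classes, feat_dim))
for task in (range(0, 5), range(5, 10)):
    y = np.repeat(list(task), 40)
    X = means[y] + 0.2 * rng.normal(size=(len(y), feat_dim))
    absorb_task(X, y)
    print("accuracy on this task:", (predict(X) == y).mean())
```

Because only the accumulated statistics G and C are updated, no gradient steps touch the backbone and nothing learned from earlier tasks is overwritten.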

Speciality vs generality: An empirical study on catastrophic forgetting in fine-tuning foundation models

Y Lin, L Tan, H Lin, Z Zheng, R Pi, J Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation models, including Vision Language Models (VLMs) and Large Language
Models (LLMs), possess the generality to handle diverse distributions and tasks, which …

CLR: Channel-wise lightweight reprogramming for continual learning

Y Ge, Y Li, S Ni, J Zhao… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual learning aims to emulate the human ability to continually accumulate knowledge
over sequential tasks. The main challenge is to maintain performance on previously learned …
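
The general pattern named in this title, lightweight channel-wise adaptation of a frozen backbone, can be sketched as below (my illustration of the idea; the layer placement and sizes are assumptions, not the paper's architecture).

```python
# Hedged sketch of channel-wise adapters on a frozen backbone (illustrative only).
import torch
import torch.nn as nn

class ChannelwiseAdapter(nn.Module):
    """Per-channel scale and shift: a handful of parameters per task, applied to frozen feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        return x * self.scale + self.shift

class AdaptedBackbone(nn.Module):
    def __init__(self, blocks: nn.ModuleList, channels_per_block, num_classes: int):
        super().__init__()
        self.blocks = blocks
        for p in self.blocks.parameters():
            p.requires_grad_(False)                  # backbone stays frozen
        self.adapters = nn.ModuleList(ChannelwiseAdapter(c) for c in channels_per_block)
        self.head = nn.Linear(channels_per_block[-1], num_classes)

    def forward(self, x):
        for block, adapter in zip(self.blocks, self.adapters):
            x = adapter(block(x))                    # only adapters (and the head) are trained
        return self.head(x.mean(dim=(2, 3)))         # global average pool + linear head

# Toy frozen "backbone": two conv blocks.
blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
])
model = AdaptedBackbone(blocks, channels_per_block=[16, 32], num_classes=5)
print(model(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 5])
```

Keeping one small adapter set per task means earlier tasks can always be re-run with their own adapters, so the frozen backbone's knowledge is never overwritten.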

Balancing speciality and versatility: a coarse to fine framework for supervised fine-tuning large language model

H Zhang, Y Wu, D Li, S Yang, R Zhao, Y Jiang… - arXiv preprint arXiv …, 2024 - arxiv.org
Aligned Large Language Models (LLMs) showcase remarkable versatility, capable of
handling diverse real-world tasks. Meanwhile, aligned LLMs are also expected to exhibit …

Investigating forgetting in pre-trained representations through continual learning

Y Luo, Z Yang, X Bai, F Meng, J Zhou… - arXiv preprint arXiv …, 2023 - arxiv.org
Representation forgetting refers to the drift of contextualized representations during
continual training. Intuitively, representation forgetting can influence the general …
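
The representation drift this snippet refers to can be probed with a very simple proxy; the sketch below (my own illustration, not the metric used in the paper) compares features of the same inputs before and after further training via mean cosine similarity.

```python
# Simple proxy for representation drift (illustrative; not the paper's metric):
# cosine similarity between features of the same inputs before and after continual training.
import numpy as np

def representation_drift(feats_before: np.ndarray, feats_after: np.ndarray) -> float:
    """Mean cosine similarity between matched rows; lower values indicate more drift."""
    a = feats_before / (np.linalg.norm(feats_before, axis=1, keepdims=True) + 1e-12)
    b = feats_after / (np.linalg.norm(feats_after, axis=1, keepdims=True) + 1e-12)
    return float((a * b).sum(axis=1).mean())

# Toy example: the "after" features are a noisy copy of the "before" features.
rng = np.random.default_rng(0)
before = rng.normal(size=(200, 64))
after = before + 0.5 * rng.normal(size=(200, 64))
print(f"mean cosine similarity: {representation_drift(before, after):.3f}")
```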

On the usage of continual learning for out-of-distribution generalization in pre-trained language models of code

M Weyssow, X Zhou, K Kim, D Lo… - Proceedings of the 31st …, 2023 - dl.acm.org
Pre-trained language models (PLMs) have become a prevalent technique in deep learning
for code, utilizing a two-stage pre-training and fine-tuning procedure to acquire general …

Premonition: Using generative models to preempt future data changes in continual learning

MD McDonnell, D Gong, E Abbasnejad… - arXiv preprint arXiv …, 2024 - arxiv.org
Continual learning requires a model to adapt to ongoing changes in the data distribution,
and often to the set of tasks to be performed. It is rare, however, that the data and task …

Information Bottleneck Based Data Correction in Continual Learning

S Chen, M Zhang, J Zhang, K Huang - European Conference on Computer …, 2025 - Springer
Continual Learning (CL) requires a model to retain previously learned knowledge while
learning new tasks. Recently, experience replay-based methods have made significant …
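
Since the snippet refers to experience replay-based methods, a bare-bones reservoir-style replay buffer is sketched below as background (a generic illustration of replay; it does not implement the information-bottleneck data correction this paper proposes).

```python
# Bare-bones replay buffer for continual learning (generic illustration of experience
# replay; NOT the information-bottleneck data correction described in the paper).
import random

class ReplayBuffer:
    """Reservoir sampling keeps a bounded, roughly uniform sample of the data stream."""
    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.data = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k: int):
        return self.rng.sample(self.data, min(k, len(self.data)))

# Usage: interleave a few stored old examples with each new training batch.
buf = ReplayBuffer(capacity=100)
for example in range(1000):          # stand-in for a non-stationary data stream
    buf.add(example)
    replay_batch = buf.sample(8)     # mixed into the current batch during training
print(len(buf.data), buf.seen)       # 100 stored examples out of 1000 seen
```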