Layerwise optimization by gradient decomposition for continual learning

S Tang, D Chen, J Zhu, S Yu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep neural networks achieve state-of-the-art and sometimes super-human performance
across a variety of domains. However, when learning tasks sequentially, the networks easily …

InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning

YS Liang, WJ Li - Proceedings of the IEEE/CVF Conference …, 2024 - openaccess.thecvf.com
Continual learning requires the model to learn multiple tasks sequentially. In continual
learning, the model should possess the ability to maintain its performance on old tasks …

Training networks in null space of feature covariance for continual learning

S Wang, X Li, J Sun, Z Xu - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
In the setting of continual learning, a network is trained on a sequence of tasks and suffers
from catastrophic forgetting. To balance plasticity and stability of the network in continual …

Dytox: Transformers for continual learning with dynamic token expansion

A Douillard, A Ramé, G Couairon… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep network architectures struggle to continually learn new tasks without forgetting the
previous tasks. A recent trend indicates that dynamic architectures based on an expansion …

Data augmented flatness-aware gradient projection for continual learning

E Yang, L Shen, Z Wang, S Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
The goal of continual learning (CL) is to continuously learn new tasks without forgetting
previously learned old tasks. To alleviate catastrophic forgetting, gradient projection based …

Bilevel coreset selection in continual learning: A new formulation and algorithm

J Hao, K Ji, M Liu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
A coreset is a small set that provides a data summary for a large dataset, such that training
solely on the small set achieves performance competitive with training on the large dataset. In …

Balancing stability and plasticity through advanced null space in continual learning

Y Kong, L Liu, Z Wang, D Tao - European Conference on Computer Vision, 2022 - Springer
Continual learning is a learning paradigm that learns tasks sequentially under resource
constraints, in which the key challenge is the stability-plasticity dilemma, i.e., it is difficult to …

Self-paced weight consolidation for continual learning

W Cong, Y Cong, G Sun, Y Liu… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Continual learning algorithms which keep the parameters of new tasks close to that of
previous tasks, are popular in preventing catastrophic forgetting in sequential task learning …

Scalable and order-robust continual learning with additive parameter decomposition

J Yoon, S Kim, E Yang, SJ Hwang - arXiv preprint arXiv:1902.09432, 2019 - arxiv.org
While recent continual learning methods largely alleviate the catastrophic forgetting
problem on toy-sized datasets, some issues remain to be tackled to apply them to real-world problems …

Self-evolved dynamic expansion model for task-free continual learning

F Ye, AG Bors - Proceedings of the IEEE/CVF International …, 2023 - openaccess.thecvf.com
Task-Free Continual Learning (TFCL) aims to learn new concepts from a stream of
data without any task information. The Dynamic Expansion Model (DEM) has shown …