Regularization-Based Efficient Continual Learning in Deep State-Space Models

Y Zhang, Z Lin, Y Sun, F Yin, C Fritsche - arXiv preprint arXiv:2403.10123, 2024 - arxiv.org
Deep state-space models (DSSMs) have gained popularity in recent years due to their
potent modeling capacity for dynamic systems. However, existing DSSM works are limited to …

Scalable and order-robust continual learning with additive parameter decomposition

J Yoon, S Kim, E Yang, SJ Hwang - arXiv preprint arXiv:1902.09432, 2019 - arxiv.org
While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-sized datasets, some issues remain to be tackled to apply them to real-world problem …

Sparcl: Sparse continual learning on the edge

Z Wang, Z Zhan, Y Gong, G Yuan… - Advances in …, 2022 - proceedings.neurips.cc
Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e., model performance deterioration on past tasks when learning a new task. However, the …

Gcr: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …

Self-attention meta-learner for continual learning

G Sokar, DC Mocanu, M Pechenizkiy - arXiv preprint arXiv:2101.12136, 2021 - arxiv.org
Continual learning aims to provide intelligent agents capable of learning multiple tasks sequentially with neural networks. One of its main challenges, catastrophic forgetting, is …

Improving Data-aware and Parameter-aware Robustness for Continual Learning

H Xiao, F Lyu - arXiv preprint arXiv:2405.17054, 2024 - arxiv.org
The goal of the Continual Learning (CL) task is to learn multiple new tasks sequentially while achieving a balance between the plasticity and stability of new and old …

Hessian Aware Low-Rank Weight Perturbation for Continual Learning

J Li, R Wang, Y Lai, C Shui, S Sahoo, CX Ling… - arXiv preprint arXiv …, 2023 - arxiv.org
Continual learning aims to learn a series of tasks sequentially without forgetting the
knowledge acquired from the previous ones. In this work, we propose the Hessian Aware …

Nispa: Neuro-inspired stability-plasticity adaptation for continual learning in sparse networks

MB Gurbuz, C Dovrolis - arXiv preprint arXiv:2206.09117, 2022 - arxiv.org
The goal of continual learning (CL) is to learn different tasks over time. The main desiderata
associated with CL are to maintain performance on older tasks, leverage the latter to …

Spacenet: Make free space for continual learning

G Sokar, DC Mocanu, M Pechenizkiy - Neurocomputing, 2021 - Elsevier
The continual learning (CL) paradigm aims to enable neural networks to learn tasks
continually in a sequential fashion. The fundamental challenge in this learning paradigm is …

Self-evolved dynamic expansion model for task-free continual learning

F Ye, AG Bors - Proceedings of the IEEE/CVF International …, 2023 - openaccess.thecvf.com
Task-Free Continual Learning (TFCL) aims to learn new concepts from a stream of data without any task information. The Dynamic Expansion Model (DEM) has shown …