Uncertainty-based continual learning with adaptive regularization

H Ahn, S Cha, D Lee, T Moon - Advances in neural …, 2019 - proceedings.neurips.cc
We introduce a new neural network-based continual learning algorithm, dubbed
Uncertainty-regularized Continual Learning (UCL), which builds on traditional Bayesian …

Continual learning via bit-level information preserving

Y Shi, L Yuan, Y Chen, J Feng - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Continual learning tackles the setting of learning different tasks sequentially. Despite the
many previous solutions, most still suffer from significant forgetting or expensive memory …

Continual learning with node-importance based adaptive group sparse regularization

S Jung, H Ahn, S Cha, T Moon - Advances in neural …, 2020 - proceedings.neurips.cc
We propose a novel regularization-based continual learning method, dubbed Adaptive
Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based …

Uncertainty-guided continual learning in Bayesian neural networks – Extended abstract

S Ebrahimi, M Elhoseiny, T Darrell… - Proc. IEEE Conf …, 2018 - openaccess.thecvf.com
Continual learning aims to learn new tasks without forgetting previously learned ones. This
is especially challenging when one cannot access data from previous tasks and when the …

Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …

Task-agnostic continual learning using online variational bayes with fixed-point updates

C Zeno, I Golan, E Hoffer, D Soudry - Neural Computation, 2021 - direct.mit.edu
Catastrophic forgetting is the notorious vulnerability of neural networks to changes in the
data distribution during learning. This phenomenon has long been considered a major …

Improving and understanding variational continual learning

S Swaroop, CV Nguyen, TD Bui, RE Turner - arXiv preprint arXiv …, 2019 - arxiv.org
In the continual learning setting, tasks are encountered sequentially. The goal is to learn
whilst i) avoiding catastrophic forgetting, ii) efficiently using model capacity, and iii) …

Generalized variational continual learning

N Loo, S Swaroop, RE Turner - arXiv preprint arXiv:2011.12328, 2020 - arxiv.org
Continual learning deals with training models on new tasks and datasets in an online
fashion. One strand of research has used probabilistic regularization for continual learning …

A closer look at rehearsal-free continual learning

JS Smith, J Tian, S Halbe, YC Hsu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual learning is a setting where machine learning models learn novel concepts from
continuously shifting training data, while simultaneously avoiding degradation of knowledge …

Achieving a better stability-plasticity trade-off via auxiliary networks in continual learning

S Kim, L Noci, A Orvieto… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In contrast to the natural capabilities of humans to learn new tasks in a sequential fashion,
neural networks are known to suffer from catastrophic forgetting, where the model's …