GDumb: A simple approach that questions our progress in continual learning

A Prabhu, PHS Torr, PK Dokania - …, Glasgow, UK, August 23–28, 2020 …, 2020 - Springer
We discuss a general formulation for the Continual Learning (CL) problem for classification—
a learning task where a stream provides samples to a learner and the goal of the learner …
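The snippet cuts off before the method itself: GDumb pairs a greedy class-balanced sampler that maintains a small memory with a learner trained from scratch on that memory alone at evaluation time. A minimal sketch of the sampler, assuming the paper's greedy balancing rule (the class and method names here are illustrative):

```python
import random
from collections import defaultdict

class GreedyBalancedSampler:
    """Greedy class-balanced memory in the spirit of GDumb: fill until
    capacity, then evict from the currently largest class to make room
    for under-represented classes."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = defaultdict(list)  # class label -> list of samples

    def __len__(self):
        return sum(len(v) for v in self.memory.values())

    def add(self, x, y):
        if len(self) < self.capacity:
            self.memory[y].append(x)
            return
        # Memory full: only admit the sample if its class is smaller
        # than the largest stored class, evicting from the latter.
        largest = max(self.memory, key=lambda c: len(self.memory[c]))
        if len(self.memory[y]) < len(self.memory[largest]):
            victims = self.memory[largest]
            victims.pop(random.randrange(len(victims)))
            self.memory[y].append(x)

    def dataset(self):
        return [(x, y) for y, xs in self.memory.items() for x in xs]
```

At test time, GDumb trains a fresh model on `sampler.dataset()` only, which is what lets the paper ask how much sophisticated CL machinery actually adds over this baseline.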

CPR: classifier-projection regularization for continual learning

S Cha, H Hsu, T Hwang, FP Calmon… - arXiv preprint arXiv …, 2020 - arxiv.org
We propose a general, yet simple patch that can be applied to existing regularization-based
continual learning methods called classifier-projection regularization (CPR). Inspired by …
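The patch itself is compact: CPR adds a term pulling the classifier's output distribution toward the uniform distribution, which is equivalent (up to a constant) to maximizing output entropy. A sketch in PyTorch, assuming the KL-to-uniform form the paper describes; `beta` is a hyperparameter:

```python
import math

import torch
import torch.nn.functional as F

def cpr_loss(logits, targets, beta=0.5):
    """Task loss plus the CPR term KL(p || uniform).

    KL(p || u) = sum_c p_c log p_c + log C = -H(p) + log C, so
    minimizing it maximizes the entropy of the classifier output."""
    ce = F.cross_entropy(logits, targets)
    log_p = F.log_softmax(logits, dim=1)
    neg_entropy = (log_p.exp() * log_p).sum(dim=1).mean()
    kl_to_uniform = neg_entropy + math.log(logits.size(1))
    return ce + beta * kl_to_uniform
```

In the paper the term is added on top of an existing regularization-based method (e.g. EWC), not plain cross-entropy; the plain form above just keeps the sketch self-contained.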

Computationally budgeted continual learning: What does matter?

A Prabhu, HA Al Kader Hammoud… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual Learning (CL) aims to sequentially train models on streams of incoming data that
vary in distribution by preserving previous knowledge while adapting to new data. Current …
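The study's setting is easy to pin down in code: each incoming stream batch grants the learner a fixed compute budget, and methods are compared under that constraint rather than with unlimited training time. A sketch of such a harness; `observe` and `train_step` are hypothetical interface names, not from the paper:

```python
def run_budgeted_stream(stream, learner, steps_per_batch=1):
    """Budgeted-CL harness sketch: every incoming batch grants the
    learner a fixed compute budget (here, gradient steps), no matter
    how expensive the method would like to be."""
    for x, y in stream:
        learner.observe(x, y)            # e.g. update a replay buffer
        for _ in range(steps_per_batch):
            learner.train_step()         # one gradient step within budget
```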

Real-time evaluation in online continual learning: A new hope

Y Ghunaim, A Bibi, K Alhamoud… - Proceedings of the …, 2023 - openaccess.thecvf.com
Current evaluations of Continual Learning (CL) methods typically assume that there
is no constraint on training time and computation. This is an unrealistic assumption for any …
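The proposed protocol ties training cost to the stream: a method that is k times slower than the stream cannot train on every batch, so it must predict with a stale model. A simplified sketch of that delay model; the learner interface is hypothetical:

```python
def real_time_eval(stream, learner, relative_cost=2):
    """Real-time protocol sketch: evaluation never waits. While one
    update costs `relative_cost` stream ticks, the batches arriving in
    between are only evaluated, never trained on."""
    correct = total = 0
    for t, (x, y) in enumerate(stream):
        correct += int(learner.predict(x) == y)  # predict before updating
        total += 1
        if t % relative_cost == 0:               # catches every k-th batch
            learner.update(x, y)
    return correct / total
```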

Efficient feature transformations for discriminative and generative continual learning

VK Verma, KJ Liang, N Mehta… - Proceedings of the …, 2021 - openaccess.thecvf.com
As neural networks are increasingly being applied to real-world applications, mechanisms to
address distributional shift and sequential task learning without forgetting are critical …
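The mechanism in the paper is task-specific, lightweight feature transformations attached to a shared backbone, so per-task growth stays small for both discriminative and generative models. A rough sketch with linear transforms as stand-ins (the paper uses more efficient grouped operations):

```python
import torch
import torch.nn as nn

class TaskTransformedNet(nn.Module):
    """Shared backbone plus tiny per-task feature transformations:
    only the small transforms and heads grow with the task count."""

    def __init__(self, in_dim=784, feat_dim=512, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.transforms = nn.ModuleList()  # one lightweight transform per task
        self.heads = nn.ModuleList()
        self.feat_dim, self.num_classes = feat_dim, num_classes

    def add_task(self):
        self.transforms.append(nn.Linear(self.feat_dim, self.feat_dim))
        self.heads.append(nn.Linear(self.feat_dim, self.num_classes))

    def forward(self, x, task_id):
        h = self.backbone(x)
        h = torch.relu(self.transforms[task_id](h))  # task-specific transform
        return self.heads[task_id](h)
```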

SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model

G Zhang, L Wang, G Kang… - Proceedings of the …, 2023 - openaccess.thecvf.com
The goal of continual learning is to improve the performance of recognition models in
learning sequentially arrived data. Although most existing works are established on the …
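SLCA's two ingredients are both simple: fine-tune the pre-trained backbone with a much smaller learning rate than the classifier (the "slow learner"), then realign the classifier post hoc using per-class feature statistics. A sketch of the two-speed optimizer; the modules and learning-rate ratio are illustrative:

```python
import torch
import torch.nn as nn

backbone = nn.Linear(768, 768)    # stand-in for a pre-trained encoder
classifier = nn.Linear(768, 100)  # class-incremental head

# "Slow learner": the backbone gets a much smaller learning rate.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4},
        {"params": classifier.parameters(), "lr": 1e-2},
    ],
    momentum=0.9,
)
# Classifier alignment, the second ingredient, then retrains the head
# post hoc on features sampled from per-class Gaussian statistics.
```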

Layerwise optimization by gradient decomposition for continual learning

S Tang, D Chen, J Zhu, S Yu… - Proceedings of the …, 2021 - openaccess.thecvf.com
Deep neural networks achieve state-of-the-art and sometimes super-human performance
across a variety of domains. However, when learning tasks sequentially, the networks easily …
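The snippet stops before the method, which decomposes each layer's gradient into a component shared with previous tasks and a task-specific residual, treating the two differently during the update. A loose sketch of one such split, assuming the shared direction is estimated from stored old-task gradients; the paper's actual update rule is more involved:

```python
import torch

def decompose_layer_gradient(g, g_old):
    """Split gradient g into its component along the old-task gradient
    direction ("shared") and the orthogonal remainder ("task-specific")."""
    d = g_old.flatten() / (g_old.norm() + 1e-12)
    shared = (g.flatten() @ d) * d.view_as(g)
    return shared, g - shared
```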

Efficient continual learning with modular networks and task-driven priors

T Veniat, L Denoyer, MA Ranzato - arXiv preprint arXiv:2012.12631, 2020 - arxiv.org
Existing literature in Continual Learning (CL) has focused on overcoming catastrophic
forgetting, the inability of the learner to recall how to perform tasks observed in the past …
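Concretely, the model is a pool of modules per layer and each task is a path through the pool; a new task either reuses past modules or adds fresh ones, with a prior favoring reuse so compute and memory grow only when needed. A sketch of that structure (the search over paths under the task-driven prior is omitted):

```python
import torch.nn as nn

class ModularNet(nn.Module):
    """Pool of modules per layer with one path per task, loosely
    following the modular-networks idea. Inputs are assumed to be
    already embedded to dimension `width`."""

    def __init__(self, num_layers=3, width=64):
        super().__init__()
        self.pools = nn.ModuleList(nn.ModuleList() for _ in range(num_layers))
        self.paths = []  # task_id -> one module index per layer
        self.width = width

    def add_task(self, reuse=None):
        """`reuse` maps layer -> existing module index; otherwise a
        fresh module is created for that layer."""
        path = []
        for layer, pool in enumerate(self.pools):
            if reuse and layer in reuse:
                path.append(reuse[layer])
            else:
                pool.append(nn.Sequential(
                    nn.Linear(self.width, self.width), nn.ReLU()))
                path.append(len(pool) - 1)
        self.paths.append(path)

    def forward(self, x, task_id):
        for pool, idx in zip(self.pools, self.paths[task_id]):
            x = pool[idx](x)
        return x
```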

Training networks in null space of feature covariance for continual learning

S Wang, X Li, J Sun, Z Xu - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
In the setting of continual learning, a network is trained on a sequence of tasks, and suffers
from catastrophic forgetting. To balance plasticity and stability of network in continual …
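The balancing mechanism is geometric: after each task, compute the uncentered covariance of each layer's input features, and restrict subsequent updates to its approximate null space so old-task responses are preserved. A sketch for one linear layer, assuming the eigenvalue-threshold approximation:

```python
import torch

def null_space_projector(feats, eps=1e-2):
    """Projector onto the approximate null space of the uncentered
    feature covariance of previous tasks (feats has shape n x d)."""
    cov = feats.T @ feats / feats.size(0)
    eigvals, eigvecs = torch.linalg.eigh(cov)       # eigenvalues ascending
    U0 = eigvecs[:, eigvals < eps * eigvals.max()]  # near-null directions
    return U0 @ U0.T                                # P = U0 U0^T (symmetric)

def project_gradient(grad, P):
    # For a weight of shape (out, in), project on the input side: old
    # features x give x @ (grad @ P).T = (x @ P) @ grad.T ~ 0, so
    # updates in this subspace barely change old-task responses.
    return grad @ P
```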

Continual learning based on OOD detection and task masking

G Kim, S Esmaeilpour, C Xiao… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Existing continual learning techniques focus on either task incremental learning (TIL) or
class incremental learning (CIL) problem, but not both. CIL and TIL differ mainly in that the …
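The snippet is cut off before the proposal: train each task with its own unit mask to protect its parameters (solving TIL), make each task network OOD-aware, and at test time pick the task whose network finds the input least out-of-distribution (solving CIL). A rough sketch with fixed random masks and max-softmax as a stand-in OOD score, both simplifications of the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedTaskNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, classes_per_task=10, num_tasks=5):
        super().__init__()
        self.fc = nn.Linear(in_dim, hidden)
        # Per-task binary unit masks (learned hard masks in the paper).
        self.register_buffer(
            "masks", (torch.rand(num_tasks, hidden) > 0.5).float())
        self.heads = nn.ModuleList(
            nn.Linear(hidden, classes_per_task) for _ in range(num_tasks))

    def forward(self, x, task_id):           # TIL: task identity given
        h = torch.relu(self.fc(x)) * self.masks[task_id]
        return self.heads[task_id](h)

    def predict_task(self, x):               # CIL: infer the task
        # Run every masked sub-network; the most confident head wins
        # (max-softmax here stands in for the paper's OOD score).
        conf = torch.stack([
            F.softmax(self.forward(x, t), dim=1).max(dim=1).values
            for t in range(len(self.heads))])        # (tasks, batch)
        return conf.argmax(dim=0)                    # predicted task ids
```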