Continual learning with recursive gradient optimization

H Liu, H Liu - arXiv preprint arXiv:2201.12522, 2022 - arxiv.org
Learning multiple tasks sequentially without forgetting previous knowledge, called Continual
Learning (CL), remains a long-standing challenge for neural networks. Most existing …

Transfer without forgetting

M Boschini, L Bonicelli, A Porrello, G Bellitto… - … on Computer Vision, 2022 - Springer
This work investigates the entanglement between Continual Learning (CL) and Transfer
Learning (TL). In particular, we shed light on the widespread application of network …

ORDisCo: Effective and efficient usage of incremental unlabeled data for semi-supervised continual learning

L Wang, K Yang, C Li, L Hong… - Proceedings of the …, 2021 - openaccess.thecvf.com
Continual learning usually assumes the incoming data are fully labeled, which might not be
applicable in real applications. In this work, we consider semi-supervised continual learning …

Class-incremental continual learning into the extended DER-verse

M Boschini, L Bonicelli, P Buzzega… - IEEE transactions on …, 2022 - ieeexplore.ieee.org
The staple of human intelligence is the capability of acquiring knowledge in a continuous
fashion. In stark contrast, Deep Networks forget catastrophically and, for this reason, the sub …

Learning Bayesian sparse networks with full experience replay for continual learning

Q Yan, D Gong, Y Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …

Continual learning in low-rank orthogonal subspaces

A Chaudhry, N Khan, P Dokania… - Advances in Neural …, 2020 - proceedings.neurips.cc
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the
other, and the goal is to remember all the tasks once the continual learning experience is …

Scalable and order-robust continual learning with additive parameter decomposition

J Yoon, S Kim, E Yang, SJ Hwang - arXiv preprint arXiv:1902.09432, 2019 - arxiv.org
While recent continual learning methods largely alleviate the catastrophic forgetting problem on toy-
sized datasets, some issues remain to be tackled to apply them to real-world problem …

Towards robust evaluations of continual learning

S Farquhar, Y Gal - arXiv preprint arXiv:1805.09733, 2018 - arxiv.org
Experiments used in current continual learning research do not faithfully assess
fundamental challenges of learning continually. Instead of assessing performance on …

Generating instance-level prompts for rehearsal-free continual learning

D Jung, D Han, J Bang, H Song - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We introduce Domain-Adaptive Prompt (DAP), a novel method for continual
learning using Vision Transformers (ViT). Prompt-based continual learning has recently …

Improving task-free continual learning by distributionally robust memory evolution

Z Wang, L Shen, L Fang, Q Suo… - … on machine learning, 2022 - proceedings.mlr.press
Task-free continual learning (CL) aims to learn a non-stationary data stream without explicit
task definitions and not forget previous knowledge. The widely adopted memory replay …