Learning to prompt for continual learning

Z Wang, Z Zhang, CY Lee, H Zhang… - Proceedings of the …, 2022 - openaccess.thecvf.com
The mainstream paradigm behind continual learning has been to adapt the model
parameters to non-stationary data distributions, where catastrophic forgetting is the central …

Generating instance-level prompts for rehearsal-free continual learning

D Jung, D Han, J Bang, H Song - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We introduce Domain-Adaptive Prompt (DAP), a novel method for continual
learning using Vision Transformers (ViT). Prompt-based continual learning has recently …

DualPrompt: Complementary prompting for rehearsal-free continual learning

Z Wang, Z Zhang, S Ebrahimi, R Sun, H Zhang… - … on Computer Vision, 2022 - Springer
Continual learning aims to enable a single model to learn a sequence of tasks without
catastrophic forgetting. Top-performing methods usually require a rehearsal buffer to store …

Introducing language guidance in prompt-based continual learning

MGZA Khan, MF Naeem, L Van Gool… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual Learning aims to learn a single model on a sequence of tasks without having
access to data from previous tasks. The biggest challenge in the domain still remains …

A closer look at rehearsal-free continual learning

JS Smith, J Tian, S Halbe, YC Hsu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Continual learning is a setting where machine learning models learn novel concepts from
continuously shifting training data, while simultaneously avoiding degradation of knowledge …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …

Representational continuity for unsupervised continual learning

D Madaan, J Yoon, Y Li, Y Liu, SJ Hwang - arXiv preprint arXiv …, 2021 - arxiv.org
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously
acquired knowledge. However, recent CL advances are restricted to supervised continual …

Learning bayesian sparse networks with full experience replay for continual learning

Q Yan, D Gong, Y Liu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual Learning (CL) methods aim to enable machine learning models to learn new
tasks without catastrophic forgetting of those that have been previously mastered. Existing …

CODA-Prompt: Continual decomposed attention-based prompting for rehearsal-free continual learning

JS Smith, L Karlinsky, V Gutta… - Proceedings of the …, 2023 - openaccess.thecvf.com
Computer vision models suffer from a phenomenon known as catastrophic forgetting when
learning novel concepts from continuously shifting training data. Typical solutions for this …

SLCA: Slow learner with classifier alignment for continual learning on a pre-trained model

G Zhang, L Wang, G Kang… - Proceedings of the …, 2023 - openaccess.thecvf.com
The goal of continual learning is to improve the performance of recognition models in
learning sequentially arrived data. Although most existing works are established on the …