Exemplar-free continual representation learning via learnable drift compensation

A Gomez-Villa, D Goswami, K Wang… - … on Computer Vision, 2025 - Springer
Exemplar-free class-incremental learning using a backbone trained from scratch and
starting from a small first task presents a significant challenge for continual representation …
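The snippet cuts off before the method, so here is a minimal sketch of the drift-compensation idea the title names, assuming the common exemplar-free setup: class prototypes are stored, and a small projector is fit on current-task data to map old-backbone features onto new-backbone features, then applied to the stored prototypes. The helper name, the linear projector, and the MSE objective are illustrative assumptions, not the paper's API:

```python
import torch
import torch.nn as nn

def compensate_prototype_drift(old_backbone, new_backbone, loader,
                               prototypes, feat_dim, epochs=5, device="cpu"):
    """Fit a projector old-features -> new-features on current-task data,
    then apply it to stored class prototypes (hypothetical helper)."""
    projector = nn.Linear(feat_dim, feat_dim).to(device)
    opt = torch.optim.Adam(projector.parameters(), lr=1e-3)
    old_backbone.eval(); new_backbone.eval()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            with torch.no_grad():
                f_old = old_backbone(x)   # features before the task update
                f_new = new_backbone(x)   # features after the task update
            loss = nn.functional.mse_loss(projector(f_old), f_new)
            opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return projector(prototypes.to(device))  # drift-corrected prototypes
```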

Category adaptation meets projected distillation in generalized continual category discovery

G Rypeść, D Marczak, S Cygert, T Trzciński… - … on Computer Vision, 2025 - Springer
Generalized Continual Category Discovery (GCCD) tackles learning from
sequentially arriving, partially labeled datasets while uncovering new categories. Traditional …
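Only the title survives here; the sketch below shows one standard way "projected distillation" can be realized: the current network's features pass through a learnable projector before being matched to the frozen previous network, leaving the representation room to keep adapting. The two-layer MLP and MSE objective are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ProjectedDistillLoss(nn.Module):
    """Feature distillation through a learnable projector (trained jointly
    with the new network); matching happens after projection rather than
    directly in feature space."""
    def __init__(self, feat_dim):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, feat_dim))

    def forward(self, feats_new, feats_old):
        # feats_old comes from the frozen previous model, hence detach()
        return nn.functional.mse_loss(self.proj(feats_new), feats_old.detach())
```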

CLIP-guided continual novel class discovery

Q Yan, Y Yang, Y Dai, X Zhang, K Wiltos… - Knowledge-Based …, 2024 - Elsevier
Continual Novel Class Discovery (CNCD) aims to adapt a trained classification
model to a new task while maintaining its performance on the old task. This presents two …
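The abstract is truncated before the mechanism; a plausible minimal form of "CLIP guidance" for discovering novel classes is zero-shot pseudo-labeling against candidate class names, sketched below with OpenAI's clip package. The prompt template, candidate-name list, and confidence gating are assumptions on my part:

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

@torch.no_grad()
def clip_pseudo_labels(images, candidate_names, device="cpu"):
    """Match CLIP-preprocessed image batches to text embeddings of candidate
    class names; returns a pseudo-label and a confidence per image."""
    model, _ = clip.load("ViT-B/32", device=device)
    text = clip.tokenize([f"a photo of a {n}" for n in candidate_names]).to(device)
    img_emb = model.encode_image(images.to(device))
    txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    sims = img_emb @ txt_emb.T        # cosine similarity, images x names
    conf, labels = sims.max(dim=-1)   # low-confidence labels can be dropped
    return labels, conf
```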

Adaptive knowledge transfer for class incremental learning

Z Feng, M Zhou, Z Gao, A Stefanidis, J Su… - Pattern Recognition …, 2024 - Elsevier
Humans are excellent at adapting to constantly changing circumstances, but deep neural
networks suffer from catastrophic forgetting. Recently, significant progress has been made with …
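The snippet stops before saying what is adapted, so no attempt is made to reproduce the method; for orientation, below is the temperature-scaled distillation loss over old-class logits that class-incremental methods in this line typically build on (standard Hinton-style KD, not this paper's contribution):

```python
import torch.nn.functional as F

def old_class_distillation(new_logits, old_logits, T=2.0):
    """KL between softened old-model and new-model predictions, restricted
    to the previously learned classes."""
    k = old_logits.size(1)                         # number of old classes
    log_p_new = F.log_softmax(new_logits[:, :k] / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T
```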

Integrating Present and Past in Unsupervised Continual Learning

Y Zhang, L Charlin, R Zemel, M Ren - arXiv preprint arXiv:2404.19132, 2024 - arxiv.org
We formulate a unifying framework for unsupervised continual learning (UCL), which
disentangles learning objectives that are specific to the present and the past data …
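Reading the framing literally, the "present" and "past" objectives could be made explicit as two additive loss terms, as in the sketch below; the MSE alignment to a frozen past encoder and the single weight beta are my assumptions, not the paper's formulation:

```python
import torch.nn.functional as F

def ucl_objective(loss_present, feats_new, feats_past, beta=1.0):
    """Present-specific SSL loss plus a past-specific stability term
    (alignment to features of a frozen past encoder)."""
    loss_past = F.mse_loss(feats_new, feats_past.detach())
    return loss_present + beta * loss_past
```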

InfoUCL: Learning Informative Representations for Unsupervised Continual Learning

L Zhang, J Zhao, Q Wu, L Pan… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Unsupervised continual learning (UCL) has made remarkable progress over the past two
years, significantly expanding the application of continual learning (CL). However, existing …

Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks

H Lee, H Cho, H Kim, D Kim, D Min, J Choo… - arXiv preprint arXiv …, 2024 - arxiv.org
This study investigates the loss of generalization ability in neural networks, revisiting
warm-starting experiments from Ash & Adams. Our empirical analysis reveals that common …
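As the title suggests, the method pairs a fast "hare" network trained by gradient descent with a slow "tortoise" network; a sketch of the two weight-update rules, an EMA for the tortoise with the hare periodically restarted from it, is below. The EMA rate and reset schedule here are placeholders, not the paper's settings:

```python
import torch

@torch.no_grad()
def tortoise_ema_update(hare, tortoise, mu=0.999):
    """Tortoise weights track an exponential moving average of the hare."""
    for p_t, p_h in zip(tortoise.parameters(), hare.parameters()):
        p_t.mul_(mu).add_(p_h, alpha=1.0 - mu)

@torch.no_grad()
def reset_hare_to_tortoise(hare, tortoise):
    """Restart the fast learner from the slow average to recover
    plasticity lost to warm-starting."""
    hare.load_state_dict(tortoise.state_dict())
```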

Branch-Tuning: Balancing Stability and Plasticity for Continual Self-Supervised Learning

W Liu, F Zhu, CL Liu - arXiv preprint arXiv:2403.18266, 2024 - arxiv.org
Self-supervised learning (SSL) has emerged as an effective paradigm for deriving general
representations from vast amounts of unlabeled data. However, as real-world applications …
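The snippet ends before describing the branch mechanism; the sketch below shows one re-parameterizable reading of "branch tuning": a frozen main convolution (stability) summed with a trainable zero-initialized branch (plasticity) that can later be folded back into a single conv. Whether the paper freezes the main path and merges exactly this way is my assumption:

```python
import copy
import torch
import torch.nn as nn

class BranchConv(nn.Module):
    """Frozen main conv plus a trainable parallel branch of identical
    configuration (default dilation/groups assumed); outputs are summed."""
    def __init__(self, conv_main: nn.Conv2d):
        super().__init__()
        self.main = conv_main
        for p in self.main.parameters():
            p.requires_grad = False           # keep old-task weights fixed
        self.branch = nn.Conv2d(conv_main.in_channels, conv_main.out_channels,
                                conv_main.kernel_size, conv_main.stride,
                                conv_main.padding, bias=False)
        nn.init.zeros_(self.branch.weight)    # module starts equal to main

    def forward(self, x):
        return self.main(x) + self.branch(x)

    @torch.no_grad()
    def merge(self) -> nn.Conv2d:
        """Fold the branch back in: parallel convs of the same shape add."""
        merged = copy.deepcopy(self.main)
        merged.weight += self.branch.weight
        return merged
```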

Forward-Backward Knowledge Distillation for Continual Clustering

M Sadeghi, Z Wang, N Armanfard - arXiv preprint arXiv:2405.19234, 2024 - arxiv.org
Unsupervised Continual Learning (UCL) is a burgeoning field in machine learning, focusing
on enabling neural networks to sequentially learn tasks without explicit label information …
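Only the title hints at the losses; a minimal ingredient would be KL distillation between soft cluster assignments, applied old-to-new ("forward", for stability) and new-to-old ("backward", to refresh the teacher). That directional reading is mine, not the paper's stated formulation:

```python
import torch.nn.functional as F

def soft_assignment_kd(student_logits, teacher_logits, T=1.0):
    """KL between softened cluster-assignment distributions; swap the
    argument roles to distill in the opposite direction."""
    p_t = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T
```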

Bridging Inter-task Gap of Continual Self-supervised Learning with External Data

H Lu, X Cao, F Yang, X Liu - 2024 - openreview.net
Recent research on Self-Supervised Learning (SSL) has demonstrated its ability to extract
high-quality representations from unlabeled samples. However, in continual learning …
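Taken at face value, "bridging with external data" could be as simple as interleaving an unlabeled external pool with each task's data during continual SSL so successive tasks share a reference distribution; the sketch below does exactly that, while the paper very likely selects or weights external samples more carefully:

```python
from torch.utils.data import ConcatDataset, DataLoader

def make_bridged_loader(task_dataset, external_dataset, batch_size=256):
    """Mix current-task samples with unlabeled external samples in every
    epoch of self-supervised training (naive uniform mixing assumed)."""
    mixed = ConcatDataset([task_dataset, external_dataset])
    return DataLoader(mixed, batch_size=batch_size, shuffle=True,
                      drop_last=True)
```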