A comprehensive survey of continual learning: theory, method and application

L Wang, X Zhang, H Su, J Zhu - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
To cope with real-world dynamics, an intelligent system needs to incrementally acquire,
update, accumulate, and exploit knowledge throughout its lifetime. This ability, known as …

An appraisal of incremental learning methods

Y Luo, L Yin, W Bai, K Mao - Entropy, 2020 - mdpi.com
As a special case of machine learning, incremental learning can acquire useful knowledge
from incoming data continuously while it does not need to access the original data. It is …

The rise and potential of large language model based agents: A survey

Z Xi, W Chen, X Guo, W He, Y Ding, B Hong… - arXiv preprint arXiv …, 2023 - arxiv.org
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing
the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are …

Efficient test-time model adaptation without forgetting

S Niu, J Wu, Y Zhang, Y Chen… - International …, 2022 - proceedings.mlr.press
Test-time adaptation provides an effective means of tackling the potential distribution shift
between model training and inference, by dynamically updating the model at test time. This …
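The snippet above describes updating the model on unlabeled test data. A common instantiation of this idea (e.g., Tent-style entropy minimization, which the work above builds on) nudges the model to be more confident on each test input. A minimal numpy sketch, using a single linear classifier as a stand-in for the adapted layers (all names here are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_minimization_step(W, x, lr=0.1):
    """One test-time update: adjust weights W to reduce the entropy
    of the model's own prediction on an unlabeled test input x."""
    p = softmax(x @ W)
    log_p = np.log(p + 1e-12)
    # dH/dz_j = -p_j * (log p_j - sum_k p_k log p_k), for H = -sum p log p
    grad_logits = -p * (log_p - (p * log_p).sum())
    grad_W = np.outer(x, grad_logits)  # chain rule through logits = x @ W
    return W - lr * grad_W             # gradient step that lowers entropy
```

Repeating this step over a stream of test inputs adapts the model to the shifted distribution without any labels; the paper above additionally addresses the forgetting this can cause.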

Prompt-aligned gradient for prompt tuning

B Zhu, Y Niu, Y Han, Y Wu… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Thanks to the large pre-trained vision-language models (VLMs) like CLIP, we can craft a
zero-shot classifier by discrete prompt design, e.g., the confidence score of an image …
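The zero-shot classifier mentioned in the snippet scores an image against one text prompt per class (e.g., "a photo of a dog") by cosine similarity in CLIP's shared embedding space. A minimal numpy sketch with placeholder embeddings standing in for CLIP's image and text encoders (the function name and temperature value are illustrative assumptions):

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, temperature=0.01):
    """CLIP-style zero-shot classification: cosine similarity between
    the image embedding and one text embedding per class prompt,
    converted to class probabilities with a temperature-scaled softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img               # one cosine score per class prompt
    z = sims / temperature         # sharpen before softmax
    e = np.exp(z - z.max())
    return e / e.sum()
```

Prompt-tuning methods like the one above replace the hand-written prompt text with learned vectors while keeping this scoring rule fixed.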

SAM-CLIP: Merging vision foundation models towards semantic and spatial understanding

H Wang, PKA Vasu, F Faghri… - Proceedings of the …, 2024 - openaccess.thecvf.com
The landscape of publicly available vision foundation models (VFMs) such as CLIP and
SAM is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their …

Dytox: Transformers for continual learning with dynamic token expansion

A Douillard, A Ramé, G Couairon… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep network architectures struggle to continually learn new tasks without forgetting the
previous tasks. A recent trend indicates that dynamic architectures based on an expansion …

S-prompts learning with pre-trained transformers: An Occam's razor for domain incremental learning

Y Wang, Z Huang, X Hong - Advances in Neural …, 2022 - proceedings.neurips.cc
State-of-the-art deep neural networks are still struggling to address the catastrophic
forgetting problem in continual learning. In this paper, we propose one simple paradigm …

Online continual learning through mutual information maximization

Y Guo, B Liu, D Zhao - International conference on machine …, 2022 - proceedings.mlr.press
This paper proposes a new online continual learning approach called OCM based on
mutual information (MI) maximization. It achieves two objectives that are critical in dealing …

Gradient surgery for multi-task learning

T Yu, S Kumar, A Gupta, S Levine… - Advances in Neural …, 2020 - proceedings.neurips.cc
While deep learning and deep reinforcement learning (RL) systems have demonstrated
impressive results in domains such as image classification, game playing, and robotic …
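The gradient surgery of this last entry (PCGrad) resolves conflicts between per-task gradients: when two gradients point in opposing directions (negative dot product), each is projected onto the normal plane of the other before averaging. A minimal numpy sketch of that projection rule (function name is my own):

```python
import numpy as np

def pcgrad_combine(grads):
    """Combine per-task gradients with PCGrad-style surgery: project
    each gradient onto the normal plane of every gradient it conflicts
    with (dot product < 0), then average the surgically altered gradients."""
    out = []
    for i, g in enumerate(grads):
        g = g.astype(float).copy()
        for j, h in enumerate(grads):
            if i == j:
                continue
            dot = g @ h
            if dot < 0:                      # conflicting directions
                g = g - (dot / (h @ h)) * h  # remove the conflicting component
        out.append(g)
    return sum(out) / len(out)
```

After surgery, the combined update no longer decreases any single task's objective to first order, which is the property the paper leverages for multi-task and multi-task RL training.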