Class incremental learning with less forgetting direction and equilibrium point

H Wen, H Qiu, L Wang, H Cheng… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Catastrophic forgetting is the core problem of class incremental learning (CIL). Existing work
mainly adopts memory replay, knowledge distillation, and dynamic architecture to alleviate …

Pass++: A dual bias reduction framework for non-exemplar class-incremental learning

F Zhu, XY Zhang, Z Cheng, CL Liu - arXiv preprint arXiv:2407.14029, 2024 - arxiv.org
Class-incremental learning (CIL) aims to recognize new classes incrementally while
maintaining the discriminability of old classes. Most existing CIL methods are exemplar …

Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning

W Ren, X Li, L Wang, T Zhao, W Qin - arXiv preprint arXiv:2402.18865, 2024 - arxiv.org
Existing research has shown that large language models (LLMs) exhibit remarkable
performance in language understanding and generation. However, when LLMs are …

Towards Non-Exemplar Semi-Supervised Class-Incremental Learning

W Liu, F Zhu, CL Liu - arXiv preprint arXiv:2403.18291, 2024 - arxiv.org
Deep neural networks perform remarkably well in closed-world scenarios. However, novel
classes emerge continually in real applications, making it necessary to learn incrementally …

Improving Group Connectivity for Generalization of Federated Deep Learning

Z Li, J Lin, Z Li, D Zhu, R Ye, T Shen, T Lin… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated learning (FL) involves multiple heterogeneous clients collaboratively training a
global model via iterative local updates and model fusion. The generalization of FL's global …

Energy-Efficient and Timeliness-Aware Continual Learning Management System

DK Kang - Energies, 2023 - mdpi.com
Continual learning has recently become a primary paradigm for deep neural network
models in modern artificial intelligence services, where streaming data patterns frequently …

Do Deep Neural Network Solutions Form a Star Domain?

A Sonthalia, A Rubinstein, E Abbasnejad… - arXiv preprint arXiv …, 2024 - arxiv.org
Entezari et al. (2022) conjectured that neural network solution sets reachable via stochastic
gradient descent (SGD) are convex, considering permutation invariances. This means that …

Bias Mitigating Few-Shot Class-Incremental Learning

LJ Zhao, ZD Chen, ZC Zhang, X Luo, XS Xu - arXiv preprint arXiv …, 2024 - arxiv.org
Few-shot class-incremental learning (FSCIL) aims at recognizing novel classes continually
with limited novel class samples. A mainstream baseline for FSCIL is first to train the whole …