Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning

Y Shi, D Shi, Z Qiao, Z Wang, Y Zhang, S Yang, C Qiu - Neural Networks, 2023 - Elsevier
Deep neural networks (DNNs) are prone to the notorious catastrophic forgetting problem
when learning new tasks incrementally. Class-incremental learning (CIL) is a promising …
