Continual learning for recurrent neural networks: an empirical evaluation

A Cossu, A Carta, V Lomonaco, D Bacciu - Neural Networks, 2021 - Elsevier
Learning continuously throughout a model's lifetime is fundamental to deploying machine learning
solutions that are robust to drifts in the data distribution. Advances in Continual Learning (CL) with …

Resilience and resilient systems of artificial intelligence: taxonomy, models and methods

V Moskalenko, V Kharchenko, A Moskalenko… - Algorithms, 2023 - mdpi.com
Artificial intelligence systems are increasingly being used in industrial applications, security
and military contexts, disaster response complexes, policing and justice practices, finance …

GCR: Gradient coreset based replay buffer selection for continual learning

R Tiwari, K Killamsetty, R Iyer… - Proceedings of the …, 2022 - openaccess.thecvf.com
Continual learning (CL) aims to develop techniques by which a single model adapts to an
increasing number of tasks encountered sequentially, thereby potentially leveraging …
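
The snippet above only names the idea; as a rough, hedged illustration of gradient-based coreset selection for a replay buffer, the sketch below greedily picks samples whose mean gradient tracks the mean gradient of the whole candidate pool. The model, data, sizes, and the helpers per_sample_grads and select_coreset are illustrative assumptions, not the GCR algorithm from the paper.

# Hedged sketch: greedy gradient-matching selection of a replay buffer.
import torch

def per_sample_grads(model, loss_fn, x, y):
    # Gradient of the loss w.r.t. the last linear layer, one row per sample.
    grads = []
    for xi, yi in zip(x, y):
        model.zero_grad()
        loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        loss.backward()
        grads.append(model[-1].weight.grad.flatten().clone())
    return torch.stack(grads)                      # shape (N, D)

def select_coreset(grads, k):
    # Greedily pick k samples whose running mean gradient stays close
    # to the mean gradient of the full candidate pool.
    target = grads.mean(dim=0)
    chosen, current = [], torch.zeros_like(target)
    for _ in range(k):
        best, best_err = None, float("inf")
        for i in range(len(grads)):
            if i in chosen:
                continue
            cand_mean = (current * len(chosen) + grads[i]) / (len(chosen) + 1)
            err = torch.norm(target - cand_mean).item()
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        current = (current * (len(chosen) - 1) + grads[best]) / len(chosen)
    return chosen

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
buffer_idx = select_coreset(per_sample_grads(model, torch.nn.CrossEntropyLoss(), x, y), k=8)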

SparCL: Sparse continual learning on the edge

Z Wang, Z Zhan, Y Gong, G Yuan… - Advances in …, 2022 - proceedings.neurips.cc
Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e.,
model performance deterioration on past tasks when learning a new task. However, the …

Powerpropagation: A sparsity inducing weight reparameterisation

J Schwarz, S Jayakumar, R Pascanu… - Advances in neural …, 2021 - proceedings.neurips.cc
The training of sparse neural networks is becoming an increasingly important tool for
reducing the computational footprint of models at training and evaluation, as well as enabling …
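
As a hedged sketch of a sparsity-inducing weight reparameterisation in the spirit of powerpropagation, the layer below stores parameters as w but applies them as w * |w|^(alpha - 1), so gradient updates are rescaled by each weight's own magnitude and small weights drift further toward zero. The class name PowerpropLinear, the layer sizes, and alpha = 2.0 are illustrative choices, not values or code from the paper.

# Hedged sketch of a sparsity-inducing weight reparameterisation.
import torch
import torch.nn as nn

class PowerpropLinear(nn.Module):
    def __init__(self, in_features, out_features, alpha=2.0):
        super().__init__()
        self.alpha = alpha
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Effective weight: w * |w|^(alpha - 1); alpha = 1 recovers a plain linear layer.
        w_eff = self.weight * self.weight.abs().pow(self.alpha - 1)
        return nn.functional.linear(x, w_eff, self.bias)

layer = PowerpropLinear(16, 8, alpha=2.0)
out = layer(torch.randn(4, 16))
out.sum().backward()   # gradients on self.weight pick up a factor proportional to |w|^(alpha - 1)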

NISPA: Neuro-inspired stability-plasticity adaptation for continual learning in sparse networks

MB Gurbuz, C Dovrolis - arXiv preprint arXiv:2206.09117, 2022 - arxiv.org
The goal of continual learning (CL) is to learn different tasks over time. The main desiderata
associated with CL are to maintain performance on older tasks, leverage the latter to …

Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity

S Liu, T Chen, Z Atashgahi, X Chen, G Sokar… - arXiv preprint arXiv …, 2021 - arxiv.org
The success of deep ensembles on improving predictive performance, uncertainty
estimation, and out-of-distribution robustness has been extensively studied in the machine …

Parameter-level soft-masking for continual learning

T Konishi, M Kurokawa, C Ono, Z Ke… - International …, 2023 - proceedings.mlr.press
Existing research on task incremental learning in continual learning has primarily focused
on preventing catastrophic forgetting (CF). Although several techniques have achieved …
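
A minimal sketch of the general idea of parameter-level soft-masking, assuming a simple importance measure (accumulated absolute gradients, normalised to [0, 1]) that scales down each parameter's update on later tasks. The functions accumulate_importance and soft_masked_step and the importance criterion are illustrative stand-ins, not the method of the cited paper.

# Hedged sketch: gradients of parameters important for past tasks are softly damped.
import torch

def accumulate_importance(model, loss_fn, loader, importance):
    # Update per-parameter importance scores after finishing a task.
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, p in model.named_parameters():
            importance[name] += p.grad.abs()
    for name in importance:
        m = importance[name].max()
        if m > 0:
            importance[name] /= m                  # normalise to [0, 1]

def soft_masked_step(model, loss_fn, x, y, importance, lr=0.01):
    # One SGD step in which parameters deemed important for past tasks move less.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            p -= lr * (1.0 - importance[name]) * p.grad

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4))
importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
loader = [(torch.randn(8, 16), torch.randint(0, 4, (8,)))]          # stand-in for task-1 data
accumulate_importance(model, torch.nn.CrossEntropyLoss(), loader, importance)
soft_masked_step(model, torch.nn.CrossEntropyLoss(), torch.randn(8, 16),
                 torch.randint(0, 4, (8,)), importance)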

Where to pay attention in sparse training for feature selection?

G Sokar, Z Atashgahi, M Pechenizkiy… - Advances in Neural …, 2022 - proceedings.neurips.cc
A new line of research for feature selection based on neural networks has recently emerged.
Despite its superiority to classical methods, it requires many training iterations to converge …
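
As a hedged illustration of connectivity-based feature selection in a sparsely trained network, the sketch below scores each input feature by the total absolute weight of its surviving first-layer connections and keeps the top-k. The random sparsity mask, the sizes, and the helper feature_scores are assumptions for illustration, not the attention mechanism proposed in the cited paper.

# Hedged sketch: rank input features by the strength of their sparse first-layer connections.
import torch

def feature_scores(first_layer_weight, mask=None):
    # Score feature j by the total absolute weight of its surviving connections.
    w = first_layer_weight.abs()
    if mask is not None:
        w = w * mask                               # zero out pruned connections
    return w.sum(dim=0)                            # one score per input feature

weight = torch.randn(32, 100)                      # 100 input features, 32 hidden units
mask = (torch.rand_like(weight) < 0.1).float()     # ~90% sparse connectivity
scores = feature_scores(weight, mask)
top_k = torch.topk(scores, k=10).indices           # indices of the 10 strongest features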

Class-incremental experience replay for continual learning under concept drift

L Korycki, B Krawczyk - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Modern machine learning systems need to be able to cope with constantly arriving and
changing data. Two main areas of research dealing with such scenarios are continual …
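
A minimal sketch of class-balanced experience replay, assuming a fixed per-class budget with reservoir sampling inside each class and a replay batch mixed into training; the concept-drift handling of the cited paper is not reproduced here, and ClassBalancedBuffer is a hypothetical helper.

# Hedged sketch: class-balanced replay buffer with per-class reservoir sampling.
import random
from collections import defaultdict

class ClassBalancedBuffer:
    def __init__(self, per_class_budget=20):
        self.budget = per_class_budget
        self.data = defaultdict(list)              # class label -> stored samples
        self.seen = defaultdict(int)               # class label -> samples observed so far

    def add(self, sample, label):
        self.seen[label] += 1
        if len(self.data[label]) < self.budget:
            self.data[label].append(sample)
        else:
            # reservoir sampling keeps a uniform subsample of each class's stream
            j = random.randrange(self.seen[label])
            if j < self.budget:
                self.data[label][j] = sample

    def sample(self, n):
        pool = [(s, c) for c, items in self.data.items() for s in items]
        return random.sample(pool, min(n, len(pool)))

buf = ClassBalancedBuffer(per_class_budget=5)
for i in range(100):
    buf.add(sample=f"x_{i}", label=i % 3)
replay_batch = buf.sample(8)                       # mix these into the next training batch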