Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges

T Lesort, V Lomonaco, A Stoian, D Maltoni, D Filliat… - Information fusion, 2020 - Elsevier
Continual learning (CL) is a particular machine learning paradigm where the data
distribution and learning objective change through time, or where all the training data and …

Deep learning in neural networks: An overview

J Schmidhuber - Neural networks, 2015 - Elsevier
In recent years, deep artificial neural networks (including recurrent ones) have won
numerous contests in pattern recognition and machine learning. This historical survey …

Transfer learning in deep reinforcement learning: A survey

Z Zhu, K Lin, AK Jain, J Zhou - IEEE Transactions on Pattern …, 2023 - ieeexplore.ieee.org
Reinforcement learning is a learning paradigm for solving sequential decision-making
problems. Recent years have witnessed remarkable progress in reinforcement learning …

Meta-learning in neural networks: A survey

T Hospedales, A Antoniou, P Micaelli… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
The field of meta-learning, or learning-to-learn, has seen a dramatic rise in interest in recent
years. Contrary to conventional approaches to AI where tasks are solved from scratch using …

Learning to learn without forgetting by maximizing transfer and minimizing interference

M Riemer, I Cases, R Ajemian, M Liu, I Rish… - arXiv preprint arXiv …, 2018 - arxiv.org
Poor performance at continual learning over non-stationary data distributions remains a
major challenge in scaling neural network learning to more human realistic …

A definition of continual reinforcement learning

D Abel, A Barreto, B Van Roy… - Advances in …, 2024 - proceedings.neurips.cc
In a standard view of the reinforcement learning problem, an agent's goal is to efficiently
identify a policy that maximizes long-term reward. However, this perspective is based on a …

Progress & compress: A scalable framework for continual learning

J Schwarz, W Czarnecki, J Luketina… - International …, 2018 - proceedings.mlr.press
We introduce a conceptually simple and scalable framework for continual learning domains
where tasks are learned sequentially. Our method is constant in the number of parameters …

Gradient episodic memory for continual learning

D Lopez-Paz, MA Ranzato - Advances in neural information …, 2017 - proceedings.neurips.cc
One major obstacle towards AI is the poor ability of models to solve new problems quickly,
without forgetting previously acquired knowledge. To better understand this issue, we …

Pathnet: Evolution channels gradient descent in super neural networks

C Fernando, D Banarse, C Blundell, Y Zwols… - arXiv preprint arXiv …, 2017 - arxiv.org
For artificial general intelligence (AGI) it would be efficient if multiple users trained the same
giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is …

Universal value function approximators

T Schaul, D Horgan, K Gregor… - … conference on machine …, 2015 - proceedings.mlr.press
Value functions are a core component of reinforcement learning. The main idea is to
construct a single function approximator V(s; theta) that estimates the long-term reward from …
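The snippet above names the core idea: a single parameterized approximator V(s; theta) shared across goals rather than one value function per task. A minimal sketch of that idea with a goal-conditioned linear value function and a TD(0) update; the class name, feature map, and all dimensions are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def features(state, goal):
    # Joint feature map phi(s, g): here simply the concatenated vectors.
    return np.concatenate([state, goal])

class UVFA:
    """One value approximator V(s, g; theta) shared across all goals."""

    def __init__(self, state_dim, goal_dim):
        # theta: a single linear weight vector over joint (s, g) features.
        self.theta = np.zeros(state_dim + goal_dim)

    def value(self, state, goal):
        # V(s, g; theta) = theta . phi(s, g)
        return features(state, goal) @ self.theta

    def td_update(self, state, goal, reward, next_state, gamma=0.9, lr=0.1):
        # One TD(0) step toward the target r + gamma * V(s', g; theta).
        phi = features(state, goal)
        target = reward + gamma * self.value(next_state, goal)
        self.theta += lr * (target - self.value(state, goal)) * phi

v = UVFA(state_dim=2, goal_dim=2)
s, g, s2 = np.ones(2), np.array([0.0, 1.0]), np.zeros(2)
v.td_update(s, g, reward=1.0, next_state=s2)
print(round(v.value(s, g), 3))  # -> 0.3
```

Because the goal enters as an input rather than an index, the same weights generalize across goals, which is the property the survey literature above builds on.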