A continual learning survey: Defying forgetting in classification tasks

M De Lange, R Aljundi, M Masana… - IEEE transactions on …, 2021 - ieeexplore.ieee.org
Artificial neural networks thrive in solving the classification problem for a particular rigid task,
acquiring knowledge through generalized learning behaviour from a distinct training phase …

Online meta-learning

C Finn, A Rajeswaran, S Kakade… - … on machine learning, 2019 - proceedings.mlr.press
A central capability of intelligent systems is the ability to continuously build upon previous
experiences to speed up and enhance learning of new tasks. Two distinct research …
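
The loop such methods build on can be made concrete. Below is a minimal numpy sketch of a follow-the-meta-leader style update on toy one-dimensional quadratic tasks: at each round the meta-initialization is refined with a MAML-style meta-gradient averaged over every task seen so far. The quadratic losses, step sizes, and task distribution are illustrative assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, beta = 0.1, 0.05      # inner and outer step sizes (illustrative)
    w = 0.0                      # meta-initialization being learned online
    seen = []                    # buffer of tasks observed so far

    def grad(w, c):              # gradient of the toy task loss 0.5*(w - c)**2
        return w - c

    for t in range(200):
        seen.append(rng.normal(loc=2.0, scale=0.5))   # a new task arrives
        # follow-the-meta-leader: a MAML step averaged over all tasks so far
        g = 0.0
        for c in seen:
            w_inner = w - alpha * grad(w, c)          # task-specific adaptation
            g += (1 - alpha) * grad(w_inner, c)       # exact meta-gradient here
        w -= beta * g / len(seen)

    print(f"meta-initialization after 200 rounds: {w:.3f}")  # near the task mean 2.0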

Few-shot learning via learning the representation, provably

SS Du, W Hu, SM Kakade, JD Lee, Q Lei - arXiv preprint arXiv:2002.09434, 2020 - arxiv.org
This paper studies few-shot learning via representation learning, where one uses $T$
source tasks with $n_1$ data per task to learn a representation in order to reduce the …
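
In the linear version of this setting the two-stage recipe is short enough to sketch in full: fit one least-squares predictor per source task, recover a shared k-dimensional subspace from their span, then solve the data-poor target task inside that subspace. The dimensions, sample sizes, and noise level below are illustrative assumptions; the paper's analysis covers more general representations.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, T, n1, n2 = 50, 5, 40, 120, 10   # dims and sample sizes (illustrative)
    B_star = np.linalg.qr(rng.normal(size=(d, k)))[0]  # ground-truth representation

    cols = []
    for _ in range(T):                     # stage 1: per-task least squares
        a = rng.normal(size=k)
        X = rng.normal(size=(n1, d))
        y = X @ (B_star @ a) + 0.1 * rng.normal(size=n1)
        cols.append(np.linalg.lstsq(X, y, rcond=None)[0])
    B_hat = np.linalg.svd(np.column_stack(cols), full_matrices=False)[0][:, :k]

    a2 = rng.normal(size=k)                # stage 2: the data-poor target task
    X2 = rng.normal(size=(n2, d))
    y2 = X2 @ (B_star @ a2) + 0.1 * rng.normal(size=n2)
    w_hat = B_hat @ np.linalg.lstsq(X2 @ B_hat, y2, rcond=None)[0]
    print("target error:", np.linalg.norm(w_hat - B_star @ a2))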

Adaptive gradient-based meta-learning methods

M Khodak, MFF Balcan… - Advances in Neural …, 2019 - proceedings.neurips.cc
We build a theoretical framework for designing and understanding practical meta-learning
methods that integrates sophisticated formalizations of task-similarity with the extensive …
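
One concrete instance of such task-similarity-aware adaptation is to let per-coordinate step sizes grow with how much task optima vary around the meta-initialization along each coordinate. The numpy sketch below is a deliberately simplified stand-in meant only to make that intuition tangible; it is not the paper's algorithm and carries none of its guarantees.

    import numpy as np

    rng = np.random.default_rng(0)
    d, T = 10, 300
    center = rng.normal(size=d)                # tasks cluster around this point
    scale = np.array([2.0] * 5 + [0.05] * 5)   # coords 0-4 vary a lot, 5-9 barely
    init = np.zeros(d)
    disp2 = np.zeros(d)                        # per-coordinate displacement stats

    for t in range(1, T + 1):
        w_task = center + scale * rng.normal(size=d)  # this task's optimum
        disp2 += ((w_task - init) ** 2 - disp2) / t   # running mean of sq. displacement
        init += (w_task - init) / t                   # running-mean initialization

    eta = np.sqrt(disp2)     # larger inner step sizes where tasks disagree more
    print(np.round(eta, 2))  # big on the first five coordinates, tiny on the rest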

On the convergence theory of gradient-based model-agnostic meta-learning algorithms

A Fallah, A Mokhtari… - … Conference on Artificial …, 2020 - proceedings.mlr.press
We study the convergence of a class of gradient-based Model-Agnostic Meta-Learning
(MAML) methods and characterize their overall complexity as well as their best achievable …
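
For a quadratic objective the one-step MAML meta-gradient analysed in this line of work has a closed form, which makes the Hessian term that complicates the convergence theory easy to see. A minimal numpy sketch, with an arbitrary positive-definite quadratic standing in for the task loss:

    import numpy as np

    rng = np.random.default_rng(0)
    d, alpha = 5, 0.1
    A = rng.normal(size=(d, d))
    A = A @ A.T + np.eye(d)         # positive-definite Hessian of the task loss
    b = rng.normal(size=d)

    def grad(w):                    # gradient of f(w) = 0.5*w@A@w - b@w
        return A @ w - b

    def F(w):                       # one-step MAML objective f(w - alpha*grad(w))
        w1 = w - alpha * grad(w)
        return 0.5 * w1 @ A @ w1 - b @ w1

    def maml_grad(w):
        # chain rule: the (I - alpha*A) Hessian factor is exactly the term
        # that the convergence analyses have to control
        return (np.eye(d) - alpha * A) @ grad(w - alpha * grad(w))

    w = rng.normal(size=d)
    eps = 1e-6
    fd = np.array([(F(w + eps * np.eye(d)[i]) - F(w)) / eps for i in range(d)])
    print(np.allclose(maml_grad(w), fd, atol=1e-4))   # sanity check: True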

Meta-learning by adjusting priors based on extended PAC-Bayes theory

R Amit, R Meir - International Conference on Machine …, 2018 - proceedings.mlr.press
In meta-learning an agent extracts knowledge from observed tasks, aiming to facilitate
learning of novel future tasks. Under the assumption that future tasks are 'related' to previous …
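
For orientation, the single-task PAC-Bayes bound that this line of work lifts to the meta level (where the prior P is itself learned from earlier tasks) reads, in one standard form: for any prior P fixed in advance and any delta in (0, 1), with probability at least 1 - delta over an i.i.d. sample of size m, simultaneously for all posteriors Q,

    \mathbb{E}_{h \sim Q}[L(h)] \;\le\; \mathbb{E}_{h \sim Q}[\hat{L}(h)]
        + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{m}/\delta)}{2m}} .

The extended theory adds an environment-level term of the same flavour that accounts for learning the prior itself across tasks.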

Provable guarantees for gradient-based meta-learning

MF Balcan, M Khodak… - … Conference on Machine …, 2019 - proceedings.mlr.press
We study the problem of meta-learning through the lens of online convex optimization,
developing a meta-algorithm bridging the gap between popular gradient-based meta …
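
The online-learning view can be sketched in a few lines: treat the initialization as the meta-learner's action, and after each task play follow-the-leader, i.e. the average of the task optima seen so far. When tasks cluster, gradient descent started from this initialization does well on average. A toy numpy illustration under these assumptions (not the paper's meta-algorithm itself):

    import numpy as np

    rng = np.random.default_rng(0)
    d, eta = 8, 0.2
    center = rng.normal(size=d)       # tasks cluster around a common optimum
    phi = np.zeros(d)                 # initialization played by the meta-learner
    optima = []

    for t in range(100):
        c = center + 0.1 * rng.normal(size=d)  # this round's task optimum
        w = phi.copy()
        for _ in range(5):                     # a few GD steps from phi on the
            w -= eta * (w - c)                 # task loss 0.5*||w - c||^2
        optima.append(c)
        phi = np.mean(optima, axis=0)          # follow-the-leader meta-update

    print(np.linalg.norm(phi - center))        # phi has homed in on the cluster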

How important is the train-validation split in meta-learning?

Y Bai, M Chen, P Zhou, T Zhao, J Lee… - International …, 2021 - proceedings.mlr.press
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …
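
The two objectives being compared are easy to write down. In the train-validation (support/query) scheme each task's data is split, adaptation uses one half, and the meta-loss is evaluated on the other; the train-train alternative reuses the same samples for both. A small linear-regression sketch under illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    d, alpha = 5, 0.05
    w_true = rng.normal(size=d)
    X = rng.normal(size=(20, d))
    y = X @ w_true + 0.1 * rng.normal(size=20)

    def meta_loss(w0, X_in, y_in, X_out, y_out):
        # adapt on the inner split, evaluate the meta-objective on the outer one
        w = w0 - alpha * X_in.T @ (X_in @ w0 - y_in) / len(y_in)
        return np.mean((X_out @ w - y_out) ** 2)

    w0 = np.zeros(d)
    tv = meta_loss(w0, X[:10], y[:10], X[10:], y[10:])    # train-validation split
    tt = meta_loss(w0, X, y, X, y)                        # train-train: no split
    print(f"train-val {tv:.3f} vs train-train {tt:.3f}")  # tt is optimistically lower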

Learning-to-learn stochastic gradient descent with biased regularization

G Denevi, C Ciliberto, R Grazzi… - … on Machine Learning, 2019 - proceedings.mlr.press
We study the problem of learning-to-learn: inferring a learning algorithm that works well on
a family of tasks sampled from an unknown distribution. As a class of algorithms we consider …
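
The biased regularization in the title means running (stochastic) gradient descent on the task loss plus a proximity term (lam/2)*||w - h||^2, where the bias vector h is what meta-training learns. A minimal numpy sketch in which a simple running-mean meta-update stands in for the paper's actual meta-algorithm:

    import numpy as np

    rng = np.random.default_rng(0)
    d, lam, n = 5, 1.0, 20
    common = rng.normal(size=d)          # the component all tasks share
    h = np.zeros(d)                      # the meta-learned bias vector

    for t in range(1, 201):
        w_t = common + 0.1 * rng.normal(size=d)       # this task's true weights
        X = rng.normal(size=(n, d))
        y = X @ w_t + 0.1 * rng.normal(size=n)
        w = h.copy()
        for _ in range(100):             # inner SGD on the biased objective:
            i = rng.integers(n)          # 0.5*(x@w - y)^2 + lam/2*||w - h||^2
            w -= 0.01 * ((X[i] @ w - y[i]) * X[i] + lam * (w - h))
        h += (w - h) / t                 # running-mean meta-update of the bias

    print(np.linalg.norm(h - common))    # the bias drifts toward the shared part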

Efficient meta learning via minibatch proximal update

P Zhou, X Yuan, H Xu, S Yan… - Advances in Neural …, 2019 - proceedings.neurips.cc
We address the problem of meta-learning which learns a prior over hypotheses from a
sample of meta-training tasks for fast adaptation on meta-testing tasks. A particularly simple …
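
For least squares, the minibatch proximal step (minimize the batch loss plus (lam/2)*||w - theta||^2 around a meta-learned anchor theta) has a closed form, which keeps within-task adaptation cheap. A minimal numpy sketch under illustrative assumptions; the running-mean meta-step is a stand-in, not the paper's update:

    import numpy as np

    rng = np.random.default_rng(0)
    d, lam = 5, 1.0
    common = rng.normal(size=d)
    theta = np.zeros(d)                  # meta-learned anchor of the proximal term

    def prox_update(X, y, theta):
        # closed-form argmin of (1/(2n))*||X@w - y||^2 + (lam/2)*||w - theta||^2
        n = len(y)
        return np.linalg.solve(X.T @ X / n + lam * np.eye(d),
                               X.T @ y / n + lam * theta)

    for t in range(1, 201):
        w_t = common + 0.1 * rng.normal(size=d)
        X = rng.normal(size=(10, d))     # one small minibatch per task
        y = X @ w_t + 0.1 * rng.normal(size=10)
        w = prox_update(X, y, theta)     # cheap within-task adaptation
        theta += (w - theta) / t         # running-mean meta-step (a stand-in)

    print(np.linalg.norm(theta - common))  # theta approaches the shared component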