Task-robust model-agnostic meta-learning

L Collins, A Mokhtari… - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning methods have shown an impressive ability to train models that rapidly learn
new tasks. However, these methods only aim to perform well in expectation over tasks …

Meta-learning without memorization

M Yin, G Tucker, M Zhou, S Levine, C Finn - arXiv preprint arXiv …, 2019 - arxiv.org
The ability to learn new concepts with small amounts of data is a critical aspect of
intelligence that has proven challenging for deep learning methods. Meta-learning has …

Provable generalization of overparameterized meta-learning trained with SGD

Y Huang, Y Liang, L Huang - Advances in Neural …, 2022 - proceedings.neurips.cc
Despite the empirical success of deep meta-learning, theoretical understanding of
overparameterized meta-learning is still limited. This paper studies the generalization of a …

Meta-learning requires meta-augmentation

J Rajendran, A Irpan, E Jang - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning algorithms aim to learn two components: a model that predicts targets for a
task, and a base learner that updates that model when given examples from a new task. This …

Structured prediction for conditional meta-learning

R Wang, Y Demiris, C Ciliberto - Advances in Neural …, 2020 - proceedings.neurips.cc
The goal of optimization-based meta-learning is to find a single initialization shared across a
distribution of tasks to speed up the process of learning new tasks. Conditional meta …

Generalization of model-agnostic meta-learning algorithms: Recurring and unseen tasks

A Fallah, A Mokhtari… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this paper, we study the generalization properties of Model-Agnostic Meta-Learning
(MAML) algorithms for supervised learning problems. We focus on the setting in which we …

How important is the train-validation split in meta-learning?

Y Bai, M Chen, P Zhou, T Zhao, J Lee… - International …, 2021 - proceedings.mlr.press
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …

Towards well-generalizing meta-learning via adversarial task augmentation

H Wang, H Mai, Y Gong, ZH Deng - Artificial Intelligence, 2023 - Elsevier
Meta-learning aims to use the knowledge from previous tasks to facilitate the learning of
novel tasks. Many meta-learning models elaborately design various task-shared inductive …

When MAML can adapt fast and how to assist when it cannot

S Arnold, S Iqbal, F Sha - International conference on …, 2021 - proceedings.mlr.press
Model-Agnostic Meta-Learning (MAML) and its variants have achieved success in
meta-learning tasks on many datasets and settings. Nonetheless, we have just started to …

Improving generalization in meta-learning via task augmentation

H Yao, LK Huang, L Zhang, Y Wei… - International …, 2021 - proceedings.mlr.press
Meta-learning has proven to be a powerful paradigm for transferring the knowledge from
previous tasks to facilitate the learning of a novel task. Current dominant algorithms train a …