How important is the train-validation split in meta-learning?

Y Bai, M Chen, P Zhou, T Zhao, J Lee… - International …, 2021 - proceedings.mlr.press
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …
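The train-validation split the snippet refers to divides each task's own examples into a support (train) set used for adaptation and a query (validation) set used to evaluate the adapted model. A minimal sketch, using a hypothetical helper `split_task` (not from the paper):

```python
import random

def split_task(examples, n_train, seed=0):
    """Split one task's examples into a train (support) set for
    adaptation and a validation (query) set for evaluating the
    adapted model -- the per-task split discussed above."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

support, query = split_task(list(range(10)), n_train=4)
print(len(support), len(query))  # 4 6
```

The alternative the paper contrasts this with is training and evaluating the inner loop on the same pooled data, i.e. skipping the split entirely.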

Towards sample-efficient overparameterized meta-learning

Y Sun, A Narang, I Gulluk… - Advances in Neural …, 2021 - proceedings.neurips.cc
An overarching goal in machine learning is to build a generalizable model with few samples.
To this end, overparameterization has been the subject of immense interest to explain the …

A representation learning perspective on the importance of train-validation splitting in meta-learning

N Saunshi, A Gupta, W Hu - International Conference on …, 2021 - proceedings.mlr.press
An effective approach in meta-learning is to utilize multiple “train tasks” to learn a good
initialization for model parameters that can help solve unseen “test tasks” with very few …

Convergence of meta-learning with task-specific adaptation over partial parameters

K Ji, JD Lee, Y Liang, HV Poor - Advances in Neural …, 2020 - proceedings.neurips.cc
Although model-agnostic meta-learning (MAML) is a very successful algorithm in meta-
learning practice, it can have high computational cost because it updates all model …
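Adapting only part of the parameters means the inner loop updates a designated subset (e.g. a final "head") while the rest stay frozen, which is the cost-saving the snippet alludes to. A scalar sketch under a toy quadratic loss, with the hypothetical function `inner_adapt` (illustrative, not the paper's code):

```python
def inner_adapt(params, targets, lr=0.1, adapt_keys=("head",)):
    """One inner-loop gradient step that updates only the parameters
    named in adapt_keys, leaving the rest frozen, as in
    partial-parameter variants of MAML.
    Toy loss per key: (params[k] - targets[k])^2."""
    adapted = dict(params)
    for k in adapt_keys:
        grad = 2.0 * (params[k] - targets[k])  # d/dw of (w - t)^2
        adapted[k] = params[k] - lr * grad
    return adapted

params = {"body": 1.0, "head": 0.0}
task_targets = {"body": 1.0, "head": 1.0}
adapted = inner_adapt(params, task_targets)
# "body" is untouched; "head" moves toward the task target
```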

Efficient meta learning via minibatch proximal update

P Zhou, X Yuan, H Xu, S Yan… - Advances in Neural …, 2019 - proceedings.neurips.cc
We address the problem of meta-learning, which learns a prior over hypotheses from a
sample of meta-training tasks for fast adaptation on meta-testing tasks. A particularly simple …
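A proximal update of this kind solves a per-task problem that trades off the task loss against staying close to the learned prior. A one-dimensional sketch with a quadratic task loss, where the inner problem has a closed form (the function name `proximal_update` is an assumption for illustration):

```python
def proximal_update(prior, y, lam):
    """Closed-form minimizer of the scalar proximal problem
        min_w (w - y)^2 + (lam / 2) * (w - prior)^2,
    a stand-in for the per-task inner step: adapt toward the task
    optimum y while a proximal term keeps w near the prior.
    Setting the derivative 2(w - y) + lam*(w - prior) to zero gives:"""
    return (2.0 * y + lam * prior) / (2.0 + lam)

print(proximal_update(prior=0.0, y=1.0, lam=0.0))    # 1.0: no regularization, jump to task optimum
print(proximal_update(prior=0.0, y=1.0, lam=100.0))  # near 0.0: strong pull toward the prior
```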

Task-robust model-agnostic meta-learning

L Collins, A Mokhtari… - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning methods have shown an impressive ability to train models that rapidly learn
new tasks. However, these methods only aim to perform well in expectation over tasks …
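The distinction the snippet draws is between minimizing the average loss over tasks and a task-robust formulation that targets the worst-observed task. A sketch of the two objectives (function names are illustrative, not the paper's API):

```python
def average_task_loss(task_losses):
    """Standard meta-objective: expected (here, mean) loss over tasks."""
    return sum(task_losses) / len(task_losses)

def worst_case_task_loss(task_losses):
    """Task-robust meta-objective: the maximum loss over observed
    tasks, so no single task is allowed to lag far behind."""
    return max(task_losses)

losses = [0.1, 0.2, 0.9]
print(average_task_loss(losses))     # 0.4: the outlier task is averaged away
print(worst_case_task_loss(losses))  # 0.9: the outlier task dominates
```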

Meta-learning without memorization

M Yin, G Tucker, M Zhou, S Levine, C Finn - arXiv preprint arXiv …, 2019 - arxiv.org
The ability to learn new concepts with small amounts of data is a critical aspect of
intelligence that has proven challenging for deep learning methods. Meta-learning has …

Meta-learning requires meta-augmentation

J Rajendran, A Irpan, E Jang - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning algorithms aim to learn two components: a model that predicts targets for a
task, and a base learner that updates that model when given examples from a new task. This …
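One common form of meta-augmentation is to randomize the task itself, for example by permuting class labels per episode so the label mapping must be inferred from the support set rather than memorized. A sketch, assuming a hypothetical helper `augment_task` on plain integer labels:

```python
import random

def augment_task(inputs, labels, n_classes, seed):
    """Per-episode label permutation: remap each class label through a
    random permutation so the input->label mapping changes every
    episode and cannot be memorized by the model."""
    rng = random.Random(seed)
    perm = list(range(n_classes))
    rng.shuffle(perm)
    return inputs, [perm[y] for y in labels]

xs, ys = augment_task(["a", "b", "c"], [0, 1, 0], n_classes=3, seed=7)
# inputs are unchanged; labels are consistently remapped
```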

Theoretical convergence of multi-step model-agnostic meta-learning

K Ji, J Yang, Y Liang - Journal of Machine Learning Research, 2022 - jmlr.org
As a popular meta-learning approach, the model-agnostic meta-learning (MAML) algorithm
has been widely used due to its simplicity and effectiveness. However, the convergence of …
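Multi-step MAML runs N inner gradient steps per task before the outer update. The inner loop itself, on a toy quadratic loss where convergence is easy to see (the function name is an assumption for illustration):

```python
def multi_step_adapt(w, target, lr=0.1, steps=5):
    """N inner-loop gradient steps on the toy loss (w - target)^2.
    Each step contracts the error by a factor (1 - 2*lr), so w
    converges geometrically toward the task optimum."""
    for _ in range(steps):
        w = w - lr * 2.0 * (w - target)
    return w

print(multi_step_adapt(0.0, 1.0))  # 0.67232 = 1 - 0.8**5
```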

Regularizing meta-learning via gradient dropout

HY Tseng, YW Chen, YH Tsai, S Liu… - Proceedings of the …, 2020 - openaccess.thecvf.com
With the growing attention on learning-to-learn new tasks using only a few examples, meta-
learning has been widely used in numerous problems such as few-shot classification …
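The regularizer named in the title applies dropout to gradients rather than activations: each inner-loop gradient coordinate is zeroed with some probability. A sketch on a plain list of floats, with inverted scaling of the survivors (the helper name is an assumption, not the paper's code):

```python
import random

def dropout_gradient(grad, p, seed=0):
    """Zero each gradient coordinate with probability p and rescale
    the survivors by 1/(1-p) (inverted dropout), so the gradient
    keeps the same expectation while its direction is randomized."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else g / (1.0 - p) for g in grad]

print(dropout_gradient([1.0, 2.0, 3.0], p=0.0))  # [1.0, 2.0, 3.0]: p=0 is a no-op
```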