Multimodality in meta-learning: A comprehensive survey

Y Ma, S Zhao, W Wang, Y Li, I King - Knowledge-Based Systems, 2022 - Elsevier
Meta-learning has gained wide popularity as a training framework that is more data-efficient
than traditional machine learning methods. However, its generalization ability in complex …

Bayesian model-agnostic meta-learning

J Yoon, T Kim, O Dia, S Kim… - Advances in neural …, 2018 - proceedings.neurips.cc
Due to the inherent model uncertainty, learning to infer a Bayesian posterior from a few-shot
dataset is an important step towards robust meta-learning. In this paper, we propose a novel …
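
The snippet cuts off before the method; BMAML is known to approximate the task posterior with a set of particles updated by Stein Variational Gradient Descent (SVGD) rather than a single point estimate. A minimal NumPy sketch of one SVGD step with an RBF kernel, where `grad_log_p` and all other names are illustrative placeholders:

```python
import numpy as np

def svgd_step(particles, grad_log_p, step_size=0.1, bandwidth=1.0):
    """One Stein Variational Gradient Descent update.

    particles: (n, d) array of posterior samples.
    grad_log_p: callable mapping (n, d) particles to (n, d) log-density gradients.
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]      # (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                     # (n, n)
    kernel = np.exp(-sq_dists / (2.0 * bandwidth ** 2))        # RBF kernel
    grads = grad_log_p(particles)                              # (n, d)
    # Attractive term: kernel-weighted average of log-density gradients.
    drive = kernel @ grads / n
    # Repulsive term: kernel gradients keep the particles spread apart.
    repulse = np.sum(kernel[:, :, None] * diffs, axis=1) / (n * bandwidth ** 2)
    return particles + step_size * (drive + repulse)
```

Iterating svgd_step with grad_log_p = lambda x: -x, for example, drives the particles toward a standard normal; in BMAML the log-density gradient would instead come from the few-shot task loss.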

C-Mixup: Improving generalization in regression

H Yao, Y Wang, L Zhang, JY Zou… - Advances in neural …, 2022 - proceedings.neurips.cc
Improving the generalization of deep networks is an important open challenge, particularly
in domains without plentiful data. The mixup algorithm improves generalization by linearly …
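
The snippet breaks off mid-sentence: vanilla mixup trains on convex combinations of random example pairs, and C-Mixup's contribution for regression is to sample the mixing partner with probability that decays with label distance. A hedged NumPy sketch under that reading; `sigma`, the Beta parameter, and the function name are illustrative:

```python
import numpy as np

def c_mixup_batch(X, y, alpha=2.0, sigma=1.0, rng=None):
    """Mix each example with a partner sampled in proportion to a Gaussian
    kernel on label distance, then interpolate inputs and targets linearly.

    X: (n, d) inputs; y: (n,) regression targets.
    """
    rng = rng or np.random.default_rng()
    n = len(y)
    lam = rng.beta(alpha, alpha)
    # Partner-sampling probabilities favor examples with nearby labels.
    probs = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(probs, 0.0)
    probs /= probs.sum(axis=1, keepdims=True)
    partners = np.array([rng.choice(n, p=probs[i]) for i in range(n)])
    X_mix = lam * X + (1.0 - lam) * X[partners]
    y_mix = lam * y + (1.0 - lam) * y[partners]
    return X_mix, y_mix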

Meta-learning without memorization

M Yin, G Tucker, M Zhou, S Levine, C Finn - arXiv preprint arXiv …, 2019 - arxiv.org
The ability to learn new concepts with small amounts of data is a critical aspect of
intelligence that has proven challenging for deep learning methods. Meta-learning has …

Meta-learning requires meta-augmentation

J Rajendran, A Irpan, E Jang - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning algorithms aim to learn two components: a model that predicts targets for a
task, and a base learner that updates that model when given examples from a new task. This …
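
One concrete form of meta-augmentation in this line of work is to inject randomness into an episode's labels, e.g. by permuting class indices so the input-to-label mapping cannot be memorized across tasks. A minimal sketch, with all names illustrative:

```python
import numpy as np

def augment_episode(support_y, query_y, num_classes, rng=None):
    """Relabel one episode with a random permutation of class indices so the
    correct labels are unpredictable without reading the support set."""
    rng = rng or np.random.default_rng()
    perm = rng.permutation(num_classes)
    return perm[support_y], perm[query_y]
```

Applied to every sampled episode, this forces the base learner to infer the current label assignment from the support examples rather than from the inputs alone.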

Meta-learning with fewer tasks through task interpolation

H Yao, L Zhang, C Finn - arXiv preprint arXiv:2106.02695, 2021 - arxiv.org
Meta-learning enables algorithms to quickly learn a newly encountered task with just a few
labeled examples by transferring previously learned knowledge. However, the bottleneck of …
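
The snippet stops at the bottleneck (limited task diversity); the paper's remedy is to densify the task distribution by interpolating between sampled tasks. The published method mixes hidden representations, but a minimal input-level sketch conveys the idea; all names are illustrative:

```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Build a synthetic task as a convex combination of two sampled tasks.

    Each task is an (X, y) pair; shapes must match across the two tasks.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    (Xa, ya), (Xb, yb) = task_a, task_b
    return lam * Xa + (1.0 - lam) * Xb, lam * ya + (1.0 - lam) * yb
```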

Generalization of model-agnostic meta-learning algorithms: Recurring and unseen tasks

A Fallah, A Mokhtari… - Advances in Neural …, 2021 - proceedings.neurips.cc
In this paper, we study the generalization properties of Model-Agnostic Meta-Learning
(MAML) algorithms for supervised learning problems. We focus on the setting in which we …
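
For reference, the algorithm under analysis adapts a shared initialization with a few inner gradient steps per task and meta-updates that initialization on query losses. A first-order sketch on linear regression (the paper analyzes the exact second-order updates; this drops the Hessian term for brevity):

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of 0.5 * mean((X @ w - y)**2) for a linear model."""
    return X.T @ (X @ w - y) / len(y)

def fomaml(tasks, dim, inner_lr=0.01, outer_lr=0.001, steps=1000, seed=0):
    """First-order MAML on linear-regression tasks.

    Each task is a tuple (X_support, y_support, X_query, y_query).
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(steps):
        Xs, ys, Xq, yq = tasks[rng.integers(len(tasks))]
        w_task = w - inner_lr * mse_grad(w, Xs, ys)   # inner-loop adaptation
        w -= outer_lr * mse_grad(w_task, Xq, yq)      # first-order outer step
    return w
```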

When MAML can adapt fast and how to assist when it cannot

S Arnold, S Iqbal, F Sha - International conference on …, 2021 - proceedings.mlr.press
Model-Agnostic Meta-Learning (MAML) and its variants have achieved success in
meta-learning tasks on many datasets and settings. Nonetheless, we have just started to …

Wide-minima density hypothesis and the explore-exploit learning rate schedule

N Iyer, V Thejas, N Kwatra, R Ramjee… - Journal of Machine …, 2023 - jmlr.org
Several papers argue that wide minima generalize better than narrow minima. In this paper,
through detailed experiments that not only corroborate the generalization properties of wide …
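
The accompanying schedule holds a high learning rate through an initial "explore" phase, hypothesized to land in a wide minimum, before decaying in an "exploit" phase. A sketch assuming linear decay to zero; parameter names and the 50/50 split are illustrative:

```python
def explore_exploit_lr(step, total_steps, peak_lr=0.1, explore_frac=0.5):
    """Hold a high learning rate for an explore phase, then decay linearly
    to zero over the remaining exploit phase."""
    explore_steps = int(explore_frac * total_steps)
    if step < explore_steps:
        return peak_lr
    remaining = max(1, total_steps - explore_steps)
    return peak_lr * max(0.0, (total_steps - step) / remaining)
```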

Improving Generalization in Meta-Learning via Meta-Gradient Augmentation

R Wang, H Sun, Q Wei, X Nie, Y Ma, Y Yin - arXiv preprint arXiv …, 2023 - arxiv.org
Meta-learning methods typically follow a two-loop framework, where each loop is prone to
overfitting, hindering rapid adaptation and generalization to new …