Transfer meta-learning: Information-theoretic bounds and information meta-risk minimization

ST Jose, O Simeone, G Durisi - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Meta-learning automatically infers an inductive bias by observing data from a number of
related tasks. The inductive bias is encoded by hyperparameters that determine aspects of …

Information-theoretic generalization bounds for meta-learning and applications

ST Jose, O Simeone - Entropy, 2021 - mdpi.com
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from
data corresponding to multiple related tasks with the goal of improving the sample efficiency …

Conditional mutual information-based generalization bound for meta learning

A Rezazadeh, ST Jose, G Durisi… - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
Meta-learning optimizes an inductive bias—typically in the form of the hyperparameters of a
base-learning algorithm—by observing data from a finite number of related tasks. This paper …

An information-theoretic analysis of the impact of task similarity on meta-learning

ST Jose, O Simeone - 2021 IEEE International Symposium on …, 2021 - ieeexplore.ieee.org
Meta-learning aims at optimizing the hyperparameters of a model class or training algorithm
from the observation of data from a number of related tasks. Following the setting of Baxter …

A general framework for PAC-Bayes bounds for meta-learning

A Rezazadeh - arXiv preprint arXiv:2206.05454, 2022 - arxiv.org
Meta-learning automatically infers an inductive bias, which includes the hyperparameters of the
base-learning algorithm, by observing data from a finite number of related tasks. This paper …

A unified view on PAC-Bayes bounds for meta-learning

A Rezazadeh - International Conference on Machine …, 2022 - proceedings.mlr.press
Meta-learning automatically infers an inductive bias, which includes the hyperparameters of the
base-learning algorithm, by observing data from a finite number of related tasks. This paper …

Meta-learning requires meta-augmentation

J Rajendran, A Irpan, E Jang - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning algorithms aim to learn two components: a model that predicts targets for a
task, and a base learner that updates that model when given examples from a new task. This …

Task-robust model-agnostic meta-learning

L Collins, A Mokhtari… - Advances in Neural …, 2020 - proceedings.neurips.cc
Meta-learning methods have shown an impressive ability to train models that rapidly learn
new tasks. However, these methods only aim to perform well in expectation over tasks …

How important is the train-validation split in meta-learning?

Y Bai, M Chen, P Zhou, T Zhao, J Lee… - International …, 2021 - proceedings.mlr.press
Meta-learning aims to perform fast adaptation on a new task through learning a “prior” from
multiple existing tasks. A common practice in meta-learning is to perform a train-validation …

Meta-learning without memorization

M Yin, G Tucker, M Zhou, S Levine, C Finn - arXiv preprint arXiv …, 2019 - arxiv.org
The ability to learn new concepts with small amounts of data is a critical aspect of
intelligence that has proven challenging for deep learning methods. Meta-learning has …