EMO: episodic memory optimization for few-shot meta-learning

Y Du, J Shen, X Zhen… - Conference on Lifelong …, 2023 - proceedings.mlr.press
Few-shot meta-learning presents a challenge for gradient descent optimization due to the
limited number of training samples per task. To address this issue, we propose an episodic …

Meta-learning with implicit gradients

A Rajeswaran, C Finn, SM Kakade… - Advances in neural …, 2019 - proceedings.neurips.cc
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on
prior experience. Gradient (or optimization) based meta-learning has recently emerged as …

SHOT: suppressing the hessian along the optimization trajectory for gradient-based meta-learning

JH Lee, J Yoo, N Kwak - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In this paper, we hypothesize that gradient-based meta-learning (GBML) implicitly
suppresses the Hessian along the optimization trajectory in the inner loop. Based on this …

Task attended meta-learning for few-shot learning

A Aimen, S Sidheekh, NC Krishnan - arXiv preprint arXiv:2106.10642, 2021 - arxiv.org
Meta-learning (ML) has emerged as a promising direction in learning models under
constrained resource settings like few-shot learning. The popular approaches for ML either …

A structured prediction approach for conditional meta-learning

R Wang, Y Demiris, C Ciliberto - Advances in Neural Information …, 2020 - researchgate.net
Optimization-based meta-learning algorithms are a powerful class of methods for learning-to-
learn applications such as few-shot learning. They tackle the limited availability of training …

Learning to forget for meta-learning via task-and-layer-wise attenuation

S Baik, J Oh, S Hong, KM Lee - IEEE Transactions on Pattern …, 2021 - ieeexplore.ieee.org
Few-shot learning is an emerging yet challenging problem in which the goal is to achieve
generalization from only a few examples. Meta-learning tackles few-shot learning via the …

Task-Agnostic Meta-Learning for Few-shot Learning

M Abdullah Jamal, GJ Qi, M Shah - arXiv e-prints, 2018 - ui.adsabs.harvard.edu
Meta-learning approaches have been proposed to tackle the few-shot learning problem.
Typically, a meta-learner is trained on a variety of tasks in the hopes of being generalizable …

Meta-AdaM: A meta-learned adaptive optimizer with momentum for few-shot learning

S Sun, H Gao - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
Abstract We introduce Meta-AdaM, a meta-learned adaptive optimizer with momentum,
designed for few-shot learning tasks that pose significant challenges to deep learning …

Stress testing of meta-learning approaches for few-shot learning

A Aimen, S Sidheekh, V Madan… - AAAI Workshop on …, 2021 - proceedings.mlr.press
Meta-learning (ML) has emerged as a promising learning method under resource
constraints such as few-shot learning. ML approaches typically propose a methodology to …

Non-greedy gradient-based hyperparameter optimization over long horizons

P Micaelli, A Storkey - 2020 - openreview.net
Gradient-based meta-learning has earned a widespread popularity in few-shot learning, but
remains broadly impractical for tasks with long horizons (many gradient steps), due to …