Gradient-based meta-learning has earned widespread popularity in few-shot learning, but remains broadly impractical for tasks with long horizons (many gradient steps), due to …
A core capability of intelligent systems is the ability to quickly learn new tasks by drawing on prior experience. Gradient-based (or optimization-based) meta-learning has recently emerged as …
Y Du, J Shen, X Zhen… - Conference on Lifelong …, 2023 - proceedings.mlr.press
Few-shot meta-learning presents a challenge for gradient descent optimization due to the limited number of training samples per task. To address this issue, we propose an episodic …
Meta-learning methods typically follow a two-loop framework, where each loop potentially suffers from notorious overfitting, hindering rapid adaptation and generalization to new …
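The two-loop framework mentioned above can be sketched on a toy problem. This is a minimal, illustrative MAML-style example, not any of the listed papers' methods: each task is a 1-D quadratic loss, the inner loop takes one adaptation step per task, and the outer loop differentiates the post-adaptation loss through that step to update the shared initialization. The task family and all names are assumptions for illustration.

```python
# Minimal sketch of the two-loop (MAML-style) meta-learning framework.
# Toy task i: minimize L_i(theta) = (theta - c_i)^2.
# Inner loop: adapt theta per task; outer loop: update the shared init.

def inner_step(theta, c, alpha=0.1):
    # One inner-loop gradient step on L(theta) = (theta - c)^2
    grad = 2.0 * (theta - c)
    return theta - alpha * grad

def meta_train(tasks, theta=0.0, alpha=0.1, beta=0.05, iters=500):
    for _ in range(iters):
        meta_grad = 0.0
        for c in tasks:
            adapted = inner_step(theta, c, alpha)
            # Outer-loop gradient of the post-adaptation loss w.r.t. the
            # initialization theta, chained through the inner step:
            # d/dtheta (adapted - c)^2 = 2*(adapted - c) * (1 - 2*alpha)
            meta_grad += 2.0 * (adapted - c) * (1.0 - 2.0 * alpha)
        theta -= beta * meta_grad / len(tasks)
    return theta

tasks = [-1.0, 0.5, 2.0]
theta_star = meta_train(tasks)  # converges toward the task mean, 0.5
```

For these quadratic tasks the meta-loss is minimized when the initialization sits at the mean of the task optima, so one inner step from there adapts well to every task; with neural networks the same two nested loops apply, with autodiff computing the gradient through the inner step.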
Optimization-based meta-learning algorithms are a powerful class of methods for learning-to-learn applications such as few-shot learning. They tackle the limited availability of training …
JH Lee, J Yoo, N Kwak - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In this paper, we hypothesize that gradient-based meta-learning (GBML) implicitly suppresses the Hessian along the optimization trajectory in the inner loop. Based on this …
Model-agnostic meta-learning (MAML) is currently one of the dominating approaches for few-shot meta-learning. Albeit its effectiveness, the optimization of MAML …
Meta-learning of shared initialization parameters has been shown to be highly effective in solving few-shot learning tasks. However, extending the framework to many-shot scenarios, which …
S Sun, H Gao - Advances in Neural Information Processing …, 2024 - proceedings.neurips.cc
We introduce Meta-AdaM, a meta-learned adaptive optimizer with momentum, designed for few-shot learning tasks that pose significant challenges to deep learning …