How to train your MAML to excel in few-shot classification

HJ Ye, WL Chao - arXiv preprint arXiv:2106.16245, 2021 - arxiv.org
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning
algorithms nowadays. Nevertheless, its performance on few-shot classification is far behind …

Distributed Machine Learning in Edge Computing: Challenges, Solutions and Future Directions

J Tu, L Yang, J Cao - ACM Computing Surveys, 2024 - dl.acm.org
Distributed machine learning at the edge is widely used in intelligent transportation, smart
homes, industrial manufacturing, and underground pipe network monitoring to achieve low …

Learning large-scale neural fields via context pruned meta-learning

J Tack, S Kim, S Yu, J Lee, J Shin… - Advances in Neural …, 2024 - proceedings.neurips.cc
We introduce an efficient optimization-based meta-learning technique for large-scale neural
field training by realizing significant memory savings through automated online context point …

Meta-learning with self-improving momentum target

J Tack, J Park, H Lee, J Lee… - Advances in Neural …, 2022 - proceedings.neurips.cc
The idea of using a separately trained target model (or teacher) to improve the performance
of the student model has been increasingly popular in various machine learning domains …

Memory efficient meta-learning with large images

J Bronskill, D Massiceti, M Patacchiola… - Advances in Neural …, 2021 - proceedings.neurips.cc
Meta-learning approaches to few-shot classification are computationally efficient at test time,
requiring just a few optimization steps or a single forward pass to learn a new task, but they …

When meta-learning meets online and continual learning: A survey

J Son, S Lee, G Kim - IEEE Transactions on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Over the past decade, deep neural networks have demonstrated significant success using
the training scheme that involves mini-batch stochastic gradient descent on extensive …

Adversarial gradient-based meta learning with metric-based test

Y Zhang, C Wang, Q Shi, Y Feng, C Chen - Knowledge-Based Systems, 2023 - Elsevier
Gradient-based meta-learning and its approximation algorithms have been widely used in
few-shot scenarios. In practice, it is common for the trained meta-model to employ …

Sequential reptile: Inter-task gradient alignment for multilingual learning

S Lee, HB Lee, J Lee, SJ Hwang - arXiv preprint arXiv:2110.02600, 2021 - arxiv.org
Multilingual models jointly pretrained on multiple languages have achieved remarkable
performance on various multilingual downstream tasks. Moreover, models finetuned on a …

Leveraging Task Variability in Meta-learning

A Aimen, B Ladrecha, S Sidheekh, NC Krishnan - SN Computer Science, 2023 - Springer
Meta-learning (ML) utilizes meta-knowledge extracted from data to enable models to
perform well on unseen data that they have not encountered before. Typically, this meta …

On first-order meta-reinforcement learning with moreau envelopes

MT Toghani, S Perez-Salazar… - 2023 62nd IEEE …, 2023 - ieeexplore.ieee.org
Meta-Reinforcement Learning (MRL) is a promising framework for training agents that can
quickly adapt to new environments and tasks. In this work, we study the MRL problem under …