Few is enough: task-augmented active meta-learning for brain cell classification

P Yuan, A Mobiny, J Jahanipour, X Li… - … Image Computing and …, 2020 - Springer
Abstract: Deep Neural Networks (DNNs) must constantly cope with distribution changes in
the input data when the task of interest or the data collection protocol changes. Retraining a …

Meta-Learning Loss Functions for Deep Neural Networks

C Raymond - arXiv preprint arXiv:2406.09713, 2024 - arxiv.org
Humans can often quickly and efficiently solve complex new learning tasks given only a
small set of examples. In contrast, modern artificially intelligent systems often require …

Exploring meta learning: parameterizing the learning-to-learn process for image classification

C So - 2021 International Conference on Artificial Intelligence …, 2021 - ieeexplore.ieee.org
Meta-learning has emerged as a new paradigm in AI to challenge the limitation of
conventional deep learning to acquire only task-specific knowledge. Meta-learning …

Meta-learning with an adaptive task scheduler

H Yao, Y Wang, Y Wei, P Zhao… - Advances in …, 2021 - proceedings.neurips.cc
To benefit the learning of a new task, meta-learning has been proposed to transfer a well-
generalized meta-model learned from various meta-training tasks. Existing meta-learning …

Meta-learning without data via wasserstein distributionally-robust model fusion

Z Wang, X Wang, L Shen, Q Suo… - Uncertainty in …, 2022 - proceedings.mlr.press
Existing meta-learning works assume that each task has available training and testing data.
However, there are many available pre-trained models without accessing their training data …

Understanding transfer learning and gradient-based meta-learning techniques

M Huisman, A Plaat, JN van Rijn - Machine Learning, 2023 - Springer
Deep neural networks can yield good performance on various tasks but often require large
amounts of data to train them. Meta-learning received considerable attention as one …

Robust MAML: Prioritization task buffer with adaptive learning process for model-agnostic meta-learning

T Nguyen, T Luu, T Pham… - ICASSP 2021-2021 …, 2021 - ieeexplore.ieee.org
Model-agnostic meta-learning (MAML) is a popular state-of-the-art meta-learning algorithm
that provides good weight initialization of a model given a variety of learning tasks. The …
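
The bilevel update at the heart of MAML, as described in this entry, can be sketched in a few lines: adapt a copy of the parameters on each task's support set (inner loop), then update the shared initialization with gradients taken at the adapted parameters (outer loop). Below is a minimal first-order (FOMAML-style) sketch on toy 1-D regression tasks; the task family, learning rates, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(w, x, y):
    """Gradient of 0.5 * mean((w*x - y)^2) with respect to the scalar weight w."""
    return float(np.mean((w * x - y) * x))

def fomaml_step(w, slopes, inner_lr=0.1, outer_lr=0.05):
    """One first-order meta-update over a batch of 1-D regression tasks y = a*x."""
    meta_grad = 0.0
    for a in slopes:
        xs = rng.uniform(-1.0, 1.0, 20)                    # support set
        w_task = w - inner_lr * mse_grad(w, xs, a * xs)    # inner adaptation
        xq = rng.uniform(-1.0, 1.0, 20)                    # query set
        meta_grad += mse_grad(w_task, xq, a * xq)          # outer gradient at adapted params
    return w - outer_lr * meta_grad / len(slopes)

# Hypothetical meta-training tasks: lines with different slopes.
w = 0.0
slopes = [1.5, 2.0, 2.5]
for _ in range(500):
    w = fomaml_step(w, slopes)
# The initialization drifts toward a point from which one inner step
# fits any of the task slopes well (here, near their mean).
```

Full second-order MAML would also differentiate through the inner update; the first-order variant shown here drops that term, which is a common simplification.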

On modulating the gradient for meta-learning

C Simon, P Koniusz, R Nock, M Harandi - Computer Vision–ECCV 2020 …, 2020 - Springer
Inspired by optimization techniques, we propose a novel meta-learning algorithm with
gradient modulation to encourage fast-adaptation of neural networks in the absence of …

Learning to learn by jointly optimizing neural architecture and weights

Y Ding, Y Wu, C Huang, S Tang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Meta-learning enables models to adapt to new environments rapidly with a few training
examples. Current gradient-based meta-learning methods concentrate on finding good …

Towards Task Sampler Learning for Meta-Learning

J Wang, W Qiang, X Su, C Zheng, F Sun… - International Journal of …, 2024 - Springer
Meta-learning aims to learn general knowledge with diverse training tasks conducted from
limited data, and then transfer it to new tasks. It is commonly believed that increasing task …