from large data sources to new tasks. In few-class, few-shot target task settings (i.e., when
there are only a few classes and training examples available in the target task), meta-
learning approaches that optimize for future task learning have outperformed the typical
transfer approach of initializing model weights from a pretrained starting point. But as we
experimentally show, meta-learning algorithms that work well in the few-class setting do not …