Authors
Ameya Prabhu, Philip HS Torr, Puneet K Dokania
Publication date
2020/8/23
Conference
European Conference on Computer Vision
Pages
524-540
Publisher
Springer, Cham
Description
We discuss a general formulation for the Continual Learning (CL) problem for classification: a learning task where a stream provides samples to a learner, and the goal of the learner, depending on the samples it receives, is to continually upgrade its knowledge about the old classes and learn new ones. Our formulation takes inspiration from the open-set recognition problem, where test scenarios do not necessarily belong to the training distribution. We also discuss various quirks and assumptions encoded in recently proposed approaches for CL. We argue that some oversimplify the problem to an extent that leaves it with very little practical importance and makes it extremely easy to perform well on. To validate this, we propose GDumb, which (1) greedily stores samples in memory as they come; and (2) at test time, trains a model from scratch using only the samples in memory. We show that even though …
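The two-step procedure described in the abstract is compact enough to sketch. Below is a minimal, hypothetical Python sketch of the greedy storage step (step 1); the class name GreedyBalancingSampler, the method observe, and the exact eviction rule are our own illustrative assumptions, not taken from the paper, and reflect one plausible reading of "greedily stores samples in memory as they come".

```python
import random

class GreedyBalancingSampler:
    """Fixed-budget memory filled greedily from a stream (hypothetical sketch).

    Step (1) of GDumb as described in the abstract: incoming samples are
    stored in a bounded memory; once the budget is exhausted, a sample from
    the currently largest class is evicted so the memory stays roughly
    class-balanced. Names and eviction details are illustrative assumptions.
    """

    def __init__(self, capacity):
        self.capacity = capacity  # total number of samples to keep
        self.memory = {}          # class label -> list of stored samples

    def _size(self):
        return sum(len(bucket) for bucket in self.memory.values())

    def observe(self, x, y):
        if self._size() < self.capacity:
            # Budget not yet exhausted: always admit the sample.
            self.memory.setdefault(y, []).append(x)
            return
        # Budget full: admit only if class y is below its balanced share.
        num_classes = len(self.memory) + (0 if y in self.memory else 1)
        per_class = self.capacity // num_classes
        if len(self.memory.get(y, [])) < per_class:
            # Evict a random sample from the largest class to make room.
            largest = max(self.memory, key=lambda c: len(self.memory[c]))
            victims = self.memory[largest]
            victims.pop(random.randrange(len(victims)))
            self.memory.setdefault(y, []).append(x)
        # Otherwise the incoming sample is simply discarded.
```

Step (2) would then train a classifier from scratch on the contents of self.memory at test time; nothing from the stream outside the memory is retained.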
Total citations
Scholar articles
A Prabhu, PHS Torr, PK Dokania - Computer Vision–ECCV 2020: 16th European …, 2020