Authors
Jifei Song, Yongxin Yang, Yi-Zhe Song, Tao Xiang, Timothy M Hospedales
Publication date
2019
Conference
Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition
Pages
719-728
Description
We aim to learn a domain generalizable person re-identification (ReID) model. When such a model is trained on a set of source domains (ReID datasets collected from different camera networks), it can be directly applied to any new unseen dataset for effective ReID without any model updating. Despite its practical value in real-world deployments, generalizable ReID has seldom been studied. In this work, a novel deep ReID model termed Domain-Invariant Mapping Network (DIMN) is proposed. DIMN is designed to learn a mapping between a person image and its identity classifier, i.e., it produces a classifier using a single shot. To make the model domain-invariant, we follow a meta-learning pipeline and sample a subset of source domain training tasks during each training episode. However, the model is significantly different from conventional meta-learning methods in that: (1) no model updating is required for the target domain, (2) different training tasks share a memory bank for maintaining both scalability and discrimination ability, and (3) it can be used to match an arbitrary number of identities in a target domain. Extensive experiments on a newly proposed large-scale ReID domain generalization benchmark show that our DIMN significantly outperforms alternative domain generalization or meta-learning methods.
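The core ideas in the abstract — single-shot generation of an identity classifier from one gallery image, episodic sampling of source-domain tasks, and a memory bank shared across training tasks — can be illustrated with a toy sketch. This is not the paper's actual deep architecture: the linear map `W`, the random features, the dimensions, and the episode count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
feat_dim, n_domains, n_ids_per_domain = 8, 3, 5

# Toy stand-in for the mapping network: a linear map that turns a
# gallery image embedding into the weight vector of that identity's
# classifier (DIMN uses a learned deep network for this mapping).
W = rng.normal(scale=0.1, size=(feat_dim, feat_dim))

def classifier_from_image(gallery_embedding):
    """Single-shot weight generation: one gallery image -> one classifier."""
    w = W @ gallery_embedding
    return w / np.linalg.norm(w)

# Episodic sketch: each episode samples one source domain (a subset of
# the training tasks). All episodes write into a single memory bank,
# keyed by identity, so classifiers accumulate across domains.
memory_bank = {}
for episode in range(10):
    domain = int(rng.integers(n_domains))
    for identity in range(n_ids_per_domain):
        gallery = rng.normal(size=feat_dim)  # stand-in gallery image feature
        memory_bank[(domain, identity)] = classifier_from_image(gallery)

# At test time a probe image is scored against the generated classifiers
# for however many gallery identities exist, with no model update.
probe = rng.normal(size=feat_dim)
scores = {k: float(w @ probe) for k, w in memory_bank.items()}
best_match = max(scores, key=scores.get)
```

Matching an unseen target domain then reduces to one forward pass per gallery image plus dot-product scoring, which is why no fine-tuning is needed at deployment.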
Scholar articles
J Song, Y Yang, YZ Song, T Xiang, TM Hospedales - Proceedings of the IEEE/CVF conference on Computer …, 2019