IAN: the individual aggregation network for person search

J Xiao, Y Xie, T Tillo, K Huang, Y Wei, J Feng - Pattern Recognition, 2019 - Elsevier
Abstract
Person search in real-world scenarios is a new and challenging computer vision task with many meaningful applications. The challenge of this task mainly comes from: (1) bounding boxes for pedestrians are unavailable, so the model must search for the person over whole gallery images; (2) the visual appearance of a particular person varies greatly owing to changing poses, lighting conditions, and occlusions. To address these two critical issues in modern person search applications, we propose a novel Individual Aggregation Network (IAN) that can accurately localize persons by learning to minimize intra-person feature variations. IAN is built upon a state-of-the-art object detection framework, i.e., Faster R-CNN, so that high-quality region proposals for pedestrians can be produced in an online manner. In addition, to relieve the negative effect caused by varying visual appearances of the same individual, IAN introduces a novel center loss that increases the intra-class compactness of feature representations. The center loss encourages persons with the same identity to have similar feature characteristics. Extensive experimental results on two benchmarks, i.e., CUHK-SYSU and PRW, demonstrate the superiority of the proposed model. In particular, IAN achieves 77.23% mAP and 80.45% top-1 accuracy on CUHK-SYSU, outperforming the state of the art by 1.7% and 1.85%, respectively.
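The abstract describes a center loss that pulls features of the same identity toward a shared center to improve intra-class compactness. Below is a minimal, hedged PyTorch sketch of a standard center-loss term in that spirit; the class name, hyper-parameters, and the assumption that per-identity centers are learned as parameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class CenterLoss(nn.Module):
    """Illustrative center loss: penalizes the squared distance between each
    feature vector and the center of its identity, encouraging intra-class
    compactness of the learned representations."""

    def __init__(self, num_identities: int, feat_dim: int):
        super().__init__()
        # One learnable center per identity, optimized jointly with the network
        # (an assumption of this sketch; other variants update centers manually).
        self.centers = nn.Parameter(torch.randn(num_identities, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (batch, feat_dim) pooled person features
        # labels:   (batch,) integer identity labels
        centers_batch = self.centers[labels]  # center assigned to each sample
        # 0.5 * mean squared Euclidean distance to the identity center
        return 0.5 * ((features - centers_batch) ** 2).sum(dim=1).mean()
```

In a full person-search model of this kind, such a term would typically be added to the detection and identification losses with a weighting factor, e.g. total_loss = det_loss + id_loss + lambda_c * center_loss; the exact combination used by IAN is not specified in this abstract.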