Spatio-temporal representation factorization for video-based person re-identification

A Aich, M Zheng, S Karanam, T Chen… - Proceedings of the …, 2021 - openaccess.thecvf.com
Abstract
Despite much recent progress in video-based person re-identification (re-ID), the current state of the art still suffers from common real-world challenges such as appearance similarity among different people, occlusions, and frame misalignment. To alleviate these problems, we propose Spatio-Temporal Representation Factorization (STRF), a flexible new computational unit that can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID. The key innovations of STRF over prior work include explicit pathways for learning discriminative temporal and spatial features, with each component further factorized to capture complementary person-specific appearance and motion information. Specifically, temporal factorization comprises two branches: one for static features (e.g., the color of clothes) that do not change much over time, and one for dynamic features (e.g., walking patterns) that do. Spatial factorization likewise comprises two branches, learning both global (coarse segments) and local (finer segments) appearance features, with the local features particularly useful in cases of occlusion or spatial misalignment. Together, these two factorization operations yield a modular architecture for our parameter-light STRF unit, which can be plugged in between any two 3D convolutional layers, resulting in an end-to-end learning framework. We empirically show that STRF improves the performance of various existing baseline architectures while demonstrating new state-of-the-art results under standard person re-ID evaluation protocols on three benchmarks.
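The two factorization ideas in the abstract can be sketched in plain numpy. This is a minimal illustrative decomposition under assumed conventions, not the paper's actual 3D-convolutional STRF unit: the function names, the mean/residual split for static vs. dynamic features, and horizontal-stripe pooling for local features are all assumptions made for illustration.

```python
import numpy as np

def temporal_factorize(feats):
    """Split per-frame features into static and dynamic components.

    feats: array of shape (T, C) -- T frames, C channels.
    Returns (static, dynamic): static is the temporal mean (appearance
    cues that persist across frames, e.g. clothing color), dynamic is
    the per-frame residual (cues that vary over time, e.g. gait).
    Note: static + dynamic reconstructs feats exactly.
    """
    static = feats.mean(axis=0, keepdims=True)   # (1, C)
    dynamic = feats - static                     # (T, C)
    return static, dynamic

def spatial_factorize(fmap, parts=2):
    """Split a spatial feature map into global and local descriptors.

    fmap: array of shape (H, W, C).
    Returns (global_desc, local_descs): global_desc average-pools the
    whole map (coarse appearance); local_descs pools `parts` horizontal
    stripes, a common proxy for body parts that stays informative under
    partial occlusion or misalignment.
    """
    global_desc = fmap.mean(axis=(0, 1))              # (C,)
    stripes = np.array_split(fmap, parts, axis=0)
    local_descs = np.stack([s.mean(axis=(0, 1)) for s in stripes])
    return global_desc, local_descs                   # (C,), (parts, C)
```

In the actual method these branches operate on intermediate 3D CNN feature maps and are learned end to end; the sketch only shows why the two decompositions carry complementary information.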