Memory-attended recurrent network for video captioning

W Pei, J Zhang, X Wang, L Ke… - Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019 - openaccess.thecvf.com
Abstract
Typical techniques for video captioning follow the encoder-decoder framework, which can only focus on the one source video being processed. A potential disadvantage of such a design is that it cannot capture the multiple visual contexts of a word that appears in more than one relevant video in the training data. To tackle this limitation, we propose the Memory-Attended Recurrent Network (MARN) for video captioning, in which a memory structure is designed to explore the full-spectrum correspondence between a word and its various similar visual contexts across videos in the training data. Thus, our model is able to achieve a more comprehensive understanding of each word and yield higher captioning quality. Furthermore, the built memory structure enables our method to model the compatibility between adjacent words explicitly, instead of requiring the model to learn it implicitly, as most existing models do. Extensive validation on two real-world datasets demonstrates that our MARN consistently outperforms state-of-the-art methods.
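
The abstract describes the architecture only at a high level. The sketch below is a minimal, hypothetical illustration (not the authors' released code) of one way a word-keyed memory of cross-video visual contexts could be attended alongside the current video's features during a single decoding step. All names (MemoryAttendedDecoderStep, mem_dim, etc.), the GRU cell, and the simple softmax attentions are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of a memory-attended decoding step (not the authors' code).
    # Assumes a word-indexed memory whose slots summarize the visual contexts of each
    # word aggregated from the training videos; the decoder attends over this memory
    # in addition to the frames of the source video.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MemoryAttendedDecoderStep(nn.Module):
        def __init__(self, vocab_size, embed_dim, hidden_dim, feat_dim, mem_dim):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            # One memory slot per vocabulary word (illustrative placeholder values).
            self.memory = nn.Parameter(torch.randn(vocab_size, mem_dim))
            self.rnn = nn.GRUCell(embed_dim + feat_dim + mem_dim, hidden_dim)
            self.attn_feat = nn.Linear(hidden_dim + feat_dim, 1)
            self.attn_mem = nn.Linear(hidden_dim + mem_dim, 1)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, prev_word, hidden, video_feats):
            """One decoding step.
            prev_word:   (B,) indices of the previously generated word
            hidden:      (B, hidden_dim) previous decoder state
            video_feats: (B, T, feat_dim) encoded frame features of the source video
            """
            B, T, _ = video_feats.shape
            w = self.embed(prev_word)                                # (B, embed_dim)

            # Temporal attention over the current video's frames.
            h_exp = hidden.unsqueeze(1).expand(-1, T, -1)
            a_feat = F.softmax(
                self.attn_feat(torch.cat([h_exp, video_feats], -1)).squeeze(-1), dim=-1)
            ctx_feat = (a_feat.unsqueeze(-1) * video_feats).sum(1)   # (B, feat_dim)

            # Attention over the word-keyed memory: cross-video visual context.
            V = self.memory.size(0)
            h_mem = hidden.unsqueeze(1).expand(-1, V, -1)
            mem = self.memory.unsqueeze(0).expand(B, -1, -1)
            a_mem = F.softmax(
                self.attn_mem(torch.cat([h_mem, mem], -1)).squeeze(-1), dim=-1)
            ctx_mem = (a_mem.unsqueeze(-1) * mem).sum(1)             # (B, mem_dim)

            hidden = self.rnn(torch.cat([w, ctx_feat, ctx_mem], -1), hidden)
            logits = self.out(hidden)                                # (B, vocab_size)
            return logits, hidden

    # Example usage with arbitrary dimensions:
    step = MemoryAttendedDecoderStep(vocab_size=1000, embed_dim=64,
                                     hidden_dim=128, feat_dim=256, mem_dim=64)
    logits, h = step(torch.zeros(2, dtype=torch.long),   # previous word indices
                     torch.zeros(2, 128),                # previous hidden state
                     torch.randn(2, 8, 256))             # 8 frames of video features

In the actual MARN, the memory additionally supports modeling the compatibility between adjacent words, which this sketch omits.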