Authors
Xu Yang, Kaihua Tang, Hanwang Zhang, Jianfei Cai
Publication date
2019
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages
10685-10694
Description
We propose Scene Graph Auto-Encoder (SGAE), which incorporates the language inductive bias into the encoder-decoder image captioning framework for more human-like captions. Intuitively, we humans use this inductive bias to compose collocations and make contextual inferences in discourse. For example, when we see the relation "person on bike", it is natural to replace "on" with "ride" and infer "person riding bike on a road", even though the "road" is not evident. Therefore, exploiting such bias as a language prior is expected to help conventional encoder-decoder models overfit less to the dataset bias and focus more on reasoning. Specifically, we use the scene graph, a directed graph (G) in which an object node is connected to adjective nodes and relationship nodes, to represent the complex structural layout of both the image (I) and the sentence (S). In the textual domain, we use SGAE to learn a dictionary (D) that helps to reconstruct sentences in the S -> G -> D -> S pipeline, where D encodes the desired language prior; in the vision-language domain, we use the shared D to guide the encoder-decoder in the I -> G -> D -> S pipeline. Thanks to the scene graph representation and shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark, e.g., our SGAE-based single model achieves a new state-of-the-art 127.8 CIDEr-D on the Karpathy split, and a competitive 125.5 CIDEr-D (c40) on the official server, even compared to other ensemble models. Code has been made available at: https://github.com/yangxuntu/SGAE.
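To make the shared-dictionary idea in the abstract concrete, here is a minimal PyTorch sketch of a dictionary re-encoding step: scene-graph node embeddings attend over a learnable dictionary D and are reconstructed from it, so that the same D learned in the S -> G -> D -> S pipeline can later re-encode image scene-graph features in I -> G -> D -> S. The class name, dimensions, and the softmax-attention reconstruction are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryReEncoder(nn.Module):
    """Re-encodes scene-graph node features through a learnable dictionary D.

    Sketch only: sizes and the attention-style reconstruction are assumptions,
    not the exact architecture from the SGAE paper.
    """
    def __init__(self, num_entries: int = 10000, dim: int = 1000):
        super().__init__()
        # D: the shared dictionary, one learnable vector per entry
        self.D = nn.Parameter(torch.randn(num_entries, dim) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim) node embeddings produced by the scene-graph encoder (G)
        attn = F.softmax(x @ self.D.t(), dim=-1)  # (batch, num_entries) weights over entries
        # Reconstructed embeddings carry the language prior stored in D
        return attn @ self.D                      # (batch, dim)

# Usage sketch: D is trained while reconstructing sentences (S -> G -> D -> S),
# then reused, frozen or fine-tuned, to re-encode image scene-graph features.
node_feats = torch.randn(32, 1000)
reencoder = DictionaryReEncoder()
reencoded = reencoder(node_feats)  # would then feed the sentence decoder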