Authors
Jonathan Krause, Justin Johnson, Ranjay Krishna, Li Fei-Fei
Publication date
2017
Conference
Proceedings of the IEEE conference on computer vision and pattern recognition
Pages
317-325
Description
Recent progress on image captioning has made it possible to generate novel sentences describing images in natural language, but compressing an image into a single sentence can describe visual content in only coarse detail. While one new captioning approach, dense captioning, can potentially describe images in finer levels of detail by captioning many regions within an image, it in turn is unable to produce a coherent story for an image. In this paper we overcome these limitations by generating entire paragraphs for describing images, which can tell detailed, unified stories. We develop a model that decomposes both images and paragraphs into their constituent parts, detecting semantic regions in images and using a hierarchical recurrent neural network to reason about language. Linguistic analysis confirms the complexity of the paragraph generation task, and thorough experiments on a new dataset of image and paragraph pairs demonstrate the effectiveness of our approach.