Authors
Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, Alan L Yuille
Publication date
2016
Conference
Proceedings of the IEEE conference on computer vision and pattern recognition
Pages
3640-3649
Description
Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixel-wise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.
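The soft-weighting idea described in the abstract can be illustrated with a short sketch. This is not the authors' released code; it assumes a PyTorch setting, and the attention head width (512), the number of scales, and the class and channel counts in the usage lines are illustrative assumptions. Per-pixel weights are predicted from shared features, normalized with a softmax across scales, and used to merge the per-scale score maps; the extra per-scale supervision mentioned in the abstract would correspond to adding an auxiliary loss on each element of `scores` alongside the loss on the merged map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttention(nn.Module):
    """Softly weight per-scale score maps at each pixel (illustrative sketch)."""

    def __init__(self, in_channels: int, num_scales: int):
        super().__init__()
        # Small conv head mapping shared features to one weight map per scale;
        # the 512-channel hidden layer is an assumption, not the paper's spec.
        self.attn = nn.Sequential(
            nn.Conv2d(in_channels, 512, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, num_scales, kernel_size=1),
        )

    def forward(self, feats, scores):
        # feats:  (B, C, H, W) shared features used to predict the attention
        # scores: list of S tensors, each (B, K, H, W), one score map per scale
        weights = F.softmax(self.attn(feats), dim=1)           # (B, S, H, W)
        stacked = torch.stack(scores, dim=1)                   # (B, S, K, H, W)
        merged = (weights.unsqueeze(2) * stacked).sum(dim=1)   # (B, K, H, W)
        return merged, weights

# Usage with random tensors (2 scales, 21 classes as in PASCAL VOC):
feats = torch.randn(1, 2048, 45, 45)
scores = [torch.randn(1, 21, 45, 45) for _ in range(2)]
merged, weights = ScaleAttention(in_channels=2048, num_scales=2)(feats, scores)
```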
Total citations
Citations per year: 2016: 29 · 2017: 85 · 2018: 179 · 2019: 240 · 2020: 261 · 2021: 278 · 2022: 242 · 2023: 215 · 2024: 75
Scholar articles
LC Chen, Y Yang, J Wang, W Xu, AL Yuille - Proceedings of the IEEE conference on computer …, 2016