Doodle to search: Practical zero-shot sketch-based image retrieval

S Dey, P Riba, A Dutta, J Llados… - Proceedings of the …, 2019 - openaccess.thecvf.com
Abstract
In this paper, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to retrieve photos from unseen categories. We advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity of moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, consisting of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of the often semi-photorealistic sketches included in existing datasets. We then formulate a ZS-SBIR framework that jointly models sketches and photos in a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, state-of-the-art retrieval performance on existing datasets can already be achieved using a reduced version of our model. We further demonstrate the superior performance of our full model by comparing it with a number of alternatives on the newly proposed dataset. The new dataset, plus all training and testing code of our model, will be publicly released to facilitate future research.
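To illustrate the retrieval setup the abstract describes, here is a minimal sketch of cross-domain retrieval in a common embedding space. It is not the authors' model: the learned sketch and photo encoders are replaced by hypothetical random linear projections, and retrieval is a simple cosine-similarity ranking over a toy gallery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned encoders: random linear maps that
# project sketch and photo features into a shared 64-d embedding space.
# In the paper these would be trained networks for each domain.
D_SKETCH, D_PHOTO, D_EMB = 128, 256, 64
W_sketch = rng.normal(size=(D_SKETCH, D_EMB))
W_photo = rng.normal(size=(D_PHOTO, D_EMB))

def embed(x, W):
    """Project features into the common space and L2-normalize,
    so that dot products equal cosine similarities."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy gallery of 1000 photo feature vectors and one sketch query.
photos = rng.normal(size=(1000, D_PHOTO))
query = rng.normal(size=(1, D_SKETCH))

gallery = embed(photos, W_photo)   # shape (1000, 64)
q = embed(query, W_sketch)         # shape (1, 64)

# Retrieval = ranking gallery photos by cosine similarity to the query.
scores = (gallery @ q.T).ravel()
ranking = np.argsort(-scores)
print(ranking[:5])  # indices of the top-5 retrieved photos
```

Because both domains are normalized into the same space, retrieval reduces to a nearest-neighbor search, which is what makes the large-scale setting in (ii) tractable with standard indexing methods.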