Cross-domain self-supervised multi-task feature learning using synthetic imagery

Z Ren, YJ Lee - Proceedings of the IEEE Conference on Computer Vision and …, 2018 - openaccess.thecvf.com
Abstract
In human learning, it is common to use multiple sources of information jointly. However, most existing feature learning approaches learn from only a single task. In this paper, we propose a novel multi-task deep network to learn generalizable high-level visual representations. Since multi-task learning requires annotations for multiple properties of the same training instance, we look to synthetic images to train our network. To overcome the domain difference between real and synthetic data, we employ an unsupervised feature space domain adaptation method based on adversarial learning. Given an input synthetic RGB image, our network simultaneously predicts its surface normal, depth, and instance contour, while also minimizing the feature space domain differences between real and synthetic data. Through extensive experiments, we demonstrate that our network learns more transferable representations compared to single-task baselines. Our learned representation produces state-of-the-art transfer learning results on PASCAL VOC 2007 classification and 2012 detection.
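The abstract describes a shared feature encoder with three dense prediction heads (surface normal, depth, instance contour) trained on synthetic images, plus an adversarial domain discriminator that aligns real and synthetic features. Below is a minimal PyTorch sketch of that structure, not the authors' implementation: the module names, layer sizes, and the single discriminator update shown are illustrative assumptions, and the alternating encoder update that "fools" the discriminator is omitted.

```python
# Illustrative sketch (not the paper's code): shared encoder, three task heads,
# and a real-vs-synthetic domain discriminator over pooled features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Small convolutional backbone producing a feature map shared by all tasks."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class TaskHead(nn.Module):
    """Per-task decoder: maps shared features to a dense per-pixel prediction."""
    def __init__(self, feat_dim, out_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, 1),
        )

    def forward(self, feats, out_size):
        return F.interpolate(self.net(feats), size=out_size,
                             mode="bilinear", align_corners=False)


class DomainDiscriminator(nn.Module):
    """Classifies globally pooled features as real (1) vs. synthetic (0)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(inplace=True), nn.Linear(64, 1))

    def forward(self, feats):
        pooled = feats.mean(dim=(2, 3))   # global average pooling
        return self.net(pooled)           # raw logits


if __name__ == "__main__":
    encoder = SharedEncoder()
    heads = {
        "normal": TaskHead(64, 3),    # surface normals: 3 channels
        "depth": TaskHead(64, 1),     # depth: 1 channel
        "contour": TaskHead(64, 1),   # instance contours: 1 channel
    }
    disc = DomainDiscriminator(64)

    synth = torch.randn(2, 3, 128, 128)   # synthetic RGB batch (dummy data)
    real = torch.randn(2, 3, 128, 128)    # unlabeled real RGB batch

    f_synth, f_real = encoder(synth), encoder(real)
    preds = {name: head(f_synth, synth.shape[-2:]) for name, head in heads.items()}

    # Discriminator half of the adversarial game: learn to separate domains.
    # In training, the encoder would also be updated to fool this classifier
    # so that real and synthetic feature distributions become indistinguishable.
    d_logits = torch.cat([disc(f_synth), disc(f_real)])
    d_labels = torch.cat([torch.zeros(2, 1), torch.ones(2, 1)])
    d_loss = F.binary_cross_entropy_with_logits(d_logits, d_labels)
    print({k: tuple(v.shape) for k, v in preds.items()}, d_loss.item())
```

In a full training loop, the three task losses on synthetic data and the adversarial alignment term would be combined with weighting coefficients (values not given in the abstract); a gradient-reversal layer is a common alternative to explicit alternating updates for this kind of feature-space adaptation.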