PX-NET: Simple and efficient pixel-wise training of photometric stereo networks

F Logothetis, I Budvytis, R Mecca… - Proceedings of the IEEE/CVF International Conference on …, 2021 - openaccess.thecvf.com
Abstract
Retrieving accurate 3D reconstructions of objects from the way they reflect light is a very challenging task in computer vision. Despite more than four decades since the definition of the Photometric Stereo problem, most of the literature has had limited success when global illumination effects such as cast shadows, self-reflections and ambient light come into play, especially for specular surfaces. Recent approaches have leveraged the capabilities of deep learning in conjunction with computer graphics in order to cope with the need for vast amounts of training data to invert the image irradiance equation and retrieve the geometry of the object. However, rendering global illumination effects is a slow process, which can limit the amount of training data that can be generated. In this work we propose a novel pixel-wise training procedure for normal prediction by replacing the training data (observation maps) of globally rendered images with independently generated per-pixel data. We show that global physical effects can be approximated in the observation map domain, and that this simplifies and speeds up the data creation procedure. Our network, PX-NET, achieves state-of-the-art performance compared to other pixel-wise methods on synthetic datasets, as well as on the real DiLiGenT dataset under both dense and sparse light settings.
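The observation maps mentioned in the abstract are the per-pixel input representation used by pixel-wise photometric stereo networks: a pixel's intensities under many lights are scattered onto a 2D grid indexed by the (x, y) components of each light direction. As a minimal sketch of the concept (the exact map size, normalization and rasterization in the paper may differ; the function name and parameters here are illustrative assumptions, not the authors' API):

```python
import numpy as np

def observation_map(intensities, light_dirs, size=32):
    """Build a per-pixel observation map.

    intensities: (L,) brightness of one pixel under L light directions
    light_dirs:  (L, 3) unit light-direction vectors
    size:        side length of the square map (assumed value)
    """
    omap = np.zeros((size, size))
    # Normalize by the brightest observation so the map is
    # invariant to albedo / light-intensity scale (common choice).
    scale = intensities.max()
    if scale > 0:
        intensities = intensities / scale
    for inten, l in zip(intensities, light_dirs):
        # Project (lx, ly) in [-1, 1] onto integer grid coordinates.
        u = int(np.clip((l[0] + 1) / 2 * (size - 1), 0, size - 1))
        v = int(np.clip((l[1] + 1) / 2 * (size - 1), 0, size - 1))
        # Keep the brightest value when several lights land in one cell.
        omap[v, u] = max(omap[v, u], inten)
    return omap
```

Training then pairs each map with the ground-truth normal of that pixel; the paper's contribution is that such maps can be synthesized directly per pixel, with global effects approximated in this domain, instead of rendering full images.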