Learning self-prior for mesh denoising using dual graph convolutional networks

S. Hattori, T. Yatagawa, Y. Ohtake, H. Suzuki
European Conference on Computer Vision (ECCV), 2022, Springer
Abstract
This study proposes a deep-learning framework for mesh denoising from a single noisy input, in which two graph convolutional networks are trained jointly to filter vertex positions and facet normals separately. A prior obtained only from a single input is referred to as a self-prior. The proposed method leverages the framework of the deep image prior (DIP), which obtains a self-prior for image restoration using a convolutional neural network (CNN). We thus obtain a denoised mesh without any ground-truth noise-free meshes. Whereas the original DIP transforms a fixed random code into a noise-free image with a neural network, our method reproduces vertex displacements from a fixed random code and reproduces facet normals from feature vectors that summarize local triangle arrangements. After tuning several hyperparameters on a few validation samples, our method achieves significantly higher performance than traditional approaches that work with a single noisy input mesh. Moreover, it outperforms other methods that use deep neural networks trained on large-scale shape datasets. Because our method depends on neither large-scale datasets nor ground-truth noise-free meshes, it can easily denoise meshes whose shapes are rarely included in shape datasets. Our code is available at: https://github.com/astaka-pe/Dual-DMP.git.
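To make the abstract's optimization scheme concrete, the following is a minimal, hypothetical PyTorch sketch of a DIP-style self-prior with two jointly trained graph networks: one branch maps a fixed random code to vertex displacements, the other maps per-face features to facet normals, and a consistency term couples them. It is not the authors' implementation (see the linked repository for that); the mean-aggregation GraphConv layer, the random toy mesh data, the loss weights, and all identifiers below are illustrative assumptions.

```python
# Hypothetical sketch of a DIP-style self-prior with two coupled graph
# networks, as described in the abstract. NOT the authors' code; the layer
# design, toy data, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    """Mean-aggregation graph convolution: h' = relu(W1 h + W2 (A_norm h))."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.lin_self = nn.Linear(dim_in, dim_out)
        self.lin_nbr = nn.Linear(dim_in, dim_out)

    def forward(self, h, adj):
        # adj: row-normalized dense adjacency matrix, shape (n, n)
        return torch.relu(self.lin_self(h) + self.lin_nbr(adj @ h))

class GCN(nn.Module):
    """Small stack of graph convolutions followed by a linear output head."""
    def __init__(self, dim_in, dim_hidden, dim_out, n_layers=3):
        super().__init__()
        dims = [dim_in] + [dim_hidden] * (n_layers - 1)
        self.convs = nn.ModuleList(GraphConv(d, dim_hidden) for d in dims)
        self.head = nn.Linear(dim_hidden, dim_out)

    def forward(self, h, adj):
        for conv in self.convs:
            h = conv(h, adj)
        return self.head(h)

def face_normals(pos, faces):
    """Unit normals of triangles, computed from vertex positions."""
    v0, v1, v2 = pos[faces[:, 0]], pos[faces[:, 1]], pos[faces[:, 2]]
    return F.normalize(torch.cross(v1 - v0, v2 - v0, dim=1), dim=1)

# --- Toy stand-ins for a real mesh (replace with an actual mesh loader). ---
torch.manual_seed(0)
n_v, n_f = 100, 196
noisy_pos = torch.randn(n_v, 3)            # noisy vertex positions
faces = torch.randint(0, n_v, (n_f, 3))    # triangle vertex indices
adj_v = torch.rand(n_v, n_v)               # vertex-graph adjacency
adj_v = adj_v / adj_v.sum(1, keepdim=True)
adj_f = torch.rand(n_f, n_f)               # face-graph adjacency
adj_f = adj_f / adj_f.sum(1, keepdim=True)
face_feat = torch.randn(n_f, 16)   # stand-in for local triangle descriptors
z = torch.randn(n_v, 16)           # fixed random code, as in DIP

pos_net = GCN(16, 64, 3)   # predicts per-vertex displacements
nrm_net = GCN(16, 64, 3)   # predicts per-face normals
opt = torch.optim.Adam(
    [*pos_net.parameters(), *nrm_net.parameters()], lr=1e-3)

for step in range(1000):   # as in DIP, rely on early stopping in practice
    opt.zero_grad()
    pred_pos = noisy_pos + pos_net(z, adj_v)                  # position branch
    pred_nrm = F.normalize(nrm_net(face_feat, adj_f), dim=1)  # normal branch
    data_term = (pred_pos - noisy_pos).pow(2).sum(1).mean()
    # Coupling term: normals derived from the predicted positions should
    # agree with the normals predicted by the second network.
    consist = (1.0 - (face_normals(pred_pos, faces) * pred_nrm).sum(1)).mean()
    loss = data_term + 0.5 * consist
    loss.backward()
    opt.step()
```

The fixed random code z and the coupling loss mirror the structure described in the abstract: no external dataset is involved, the networks' inductive bias acts as the prior, and the two branches regularize each other through the normal-consistency term.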