Toward Better SSIM Loss for Unsupervised Monocular Depth Estimation

Y Cao, F Luo, Y Li - International Conference on Image and Graphics, 2023 - Springer
Abstract
Unsupervised monocular depth learning generally relies on the photometric relation among temporally adjacent images. Most previous works use both the mean absolute error (MAE) and the structural similarity index measure (SSIM), in its conventional form, as the training loss. However, they ignore the effect that the different components of the SSIM function, and their corresponding hyperparameters, have on training. To address these issues, this work proposes a new form of SSIM. Compared with the original SSIM function, the proposed form uses addition rather than multiplication to combine the luminance, contrast, and structure related components of SSIM. A loss function constructed with this scheme yields smoother gradients and achieves higher performance on unsupervised depth estimation. We conduct extensive experiments to determine a near-optimal combination of parameters for the new SSIM. Built on the popular MonoDepth approach, the optimized SSIM loss function remarkably outperforms the baseline on the KITTI-2015 outdoor dataset.
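The core change described in the abstract is replacing the multiplicative combination of SSIM's luminance, contrast, and structure terms with a weighted sum. Below is a minimal NumPy sketch of that idea; the patch-level (rather than windowed) statistics, the particular weights alpha/beta/gamma, and the function names are illustrative assumptions, not the authors' exact formulation. Intuitively, the additive form's gradient with respect to each term is just its constant weight, whereas in the multiplicative form each term's gradient is scaled by the product of the other two and can shrink toward zero, which matches the smoother-gradient behavior the abstract alludes to.

```python
# Sketch: conventional (multiplicative) SSIM vs. an additive combination of
# its luminance (l), contrast (c), and structure (s) terms. Weights and
# patch-level statistics are simplifying assumptions for illustration.
import numpy as np

C1 = (0.01 * 1.0) ** 2  # stabilizers for images with dynamic range 1.0
C2 = (0.03 * 1.0) ** 2
C3 = C2 / 2             # conventional choice

def ssim_components(x: np.ndarray, y: np.ndarray):
    """Luminance, contrast, and structure terms over two image patches."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    l = (2 * mx * my + C1) / (mx**2 + my**2 + C1)
    c = (2 * np.sqrt(vx) * np.sqrt(vy) + C2) / (vx + vy + C2)
    s = (cov + C3) / (np.sqrt(vx) * np.sqrt(vy) + C3)
    return l, c, s

def ssim_multiplicative(x, y):
    # Conventional SSIM with unit exponents: l * c * s.
    l, c, s = ssim_components(x, y)
    return l * c * s

def ssim_additive(x, y, alpha=1/3, beta=1/3, gamma=1/3):
    # Additive combination; the weights are the hyperparameters the paper
    # searches over (the equal split here is an arbitrary placeholder).
    l, c, s = ssim_components(x, y)
    return alpha * l + beta * c + gamma * s

rng = np.random.default_rng(0)
a = rng.random((11, 11))
b = np.clip(a + 0.05 * rng.standard_normal((11, 11)), 0.0, 1.0)
print(ssim_multiplicative(a, b), ssim_additive(a, b))
```

In a photometric loss of the MonoDepth style, either similarity would typically enter as something like `(1 - ssim) / 2` blended with an MAE term; the additive variant simply swaps in for the conventional SSIM at that point.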