Authors
Chenxu Luo, Zhenheng Yang, Peng Wang, Yang Wang, Wei Xu, Ram Nevatia, Alan Yuille
Publication date
2019/7/23
Journal
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher
IEEE
Description
Learning to estimate 3D geometry in a single frame and optical flow from consecutive frames by watching unlabeled videos via deep convolutional networks has made significant progress recently. Current state-of-the-art (SoTA) methods treat the two tasks independently. One typical assumption of existing depth estimation methods is that the scenes contain no independently moving objects, while object motion could be easily modeled using optical flow. In this paper, we propose to address the two tasks as a whole, i.e., to jointly understand per-pixel 3D geometry and motion. This eliminates the need for the static-scene assumption and enforces the inherent geometric consistency during the learning process, yielding significantly improved results for both tasks. We call our method "Every Pixel Counts++" or "EPC++". Specifically, during training, given two consecutive frames from a video, we adopt three parallel …
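The joint formulation the abstract describes rests on a standard geometric relation: scene depth and camera ego-motion induce a "rigid flow" between consecutive frames, and pixels where the observed optical flow deviates from it are candidates for independently moving objects. The sketch below illustrates that consistency check with NumPy; the function names, toy intrinsics, and values are illustrative assumptions, not the paper's actual networks or losses.

```python
import numpy as np

def rigid_flow_from_depth(depth, K, R, t):
    """Warp the pixel grid of frame 1 into frame 2 using depth and camera
    motion (R, t), returning the flow explained purely by ego-motion."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW

    # back-project pixels to 3D points in the frame-1 camera coordinates
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # transform into frame-2 coordinates and re-project to the image plane
    cam2 = R @ cam + t.reshape(3, 1)
    pix2 = K @ cam2
    pix2 = pix2[:2] / pix2[2:3]

    return (pix2 - pix[:2]).T.reshape(H, W, 2)

def motion_consistency_residual(optical_flow, rigid_flow):
    """Per-pixel discrepancy between the full optical flow and the rigid
    (ego-motion) flow; large values suggest independently moving objects."""
    return np.linalg.norm(optical_flow - rigid_flow, axis=-1)

if __name__ == "__main__":
    H, W = 4, 5
    K = np.array([[100.0, 0.0, W / 2],
                  [0.0, 100.0, H / 2],
                  [0.0,   0.0,   1.0]])          # toy intrinsics (assumed)
    depth = np.full((H, W), 10.0)                # toy constant depth
    R, t = np.eye(3), np.array([0.1, 0.0, 0.0])  # small sideways translation
    rigid = rigid_flow_from_depth(depth, K, R, t)
    flow = rigid.copy()
    flow[2, 3] += [5.0, 0.0]                     # one hypothetical "moving object" pixel
    print(motion_consistency_residual(flow, rigid))
```

In this toy example the residual is zero everywhere except at the one pixel whose flow is not explained by ego-motion; a learning pipeline can use such a residual to separate static background from moving regions while keeping depth and flow geometrically consistent.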
Total citations
Cited by 306 — yearly citations 2018–2024: 1, 16, 49, 78, 78, 60, 24
Scholar articles
C Luo, Z Yang, P Wang, Y Wang, W Xu, R Nevatia… - IEEE transactions on pattern analysis and machine …, 2019