Authors
Ruixu Liu, Ju Shen, He Wang, Chen Chen, Sen-ching Cheung, Vijayan Asari
Publication date
2020
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages
5064-5073
Description
We propose a novel attention-based framework for 3D human pose estimation from a monocular video. Despite the general success of end-to-end deep learning paradigms, our approach is based on two key observations: (1) single-frame predictions often suffer from temporal incoherence and jitter; (2) the error rate can be reduced remarkably by enlarging the temporal receptive field over a video. We therefore design an attention mechanism that adaptively identifies significant frames and tensor outputs from each deep neural network layer, leading to more accurate estimates. To achieve large temporal receptive fields, multi-scale dilated convolutions are employed to model long-range dependencies among frames. The architecture is straightforward to implement and can be flexibly adopted for real-time applications. Any off-the-shelf 2D pose estimation system, e.g., Mocap libraries, can be easily integrated in an ad hoc fashion. We evaluate our method both quantitatively and qualitatively on standard benchmark datasets (e.g., Human3.6M, HumanEva). It considerably outperforms all state-of-the-art algorithms, with up to an 8% error reduction (average mean per-joint position error: 34.7 mm) compared with the best previously reported results. Code is available at: https://github.com/lrxjason/Attention3DHumanPose
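The abstract combines two mechanisms: learned attention over frames and multi-scale dilated temporal convolutions over a 2D-keypoint sequence. The sketch below is a minimal, hypothetical PyTorch illustration of that combination, not the authors' architecture (the official code is at the GitHub link above); the class AttentionDilatedLifter and all layer sizes, dilation rates, and parameter names are illustrative assumptions.

    # Minimal sketch (assumed design, not the official Attention3DHumanPose code):
    # (1) multi-scale dilated 1D convolutions along the time axis enlarge the
    #     temporal receptive field;
    # (2) a learned softmax attention over frames weights their contribution
    #     to the 3D pose estimate.
    import torch
    import torch.nn as nn

    class AttentionDilatedLifter(nn.Module):
        def __init__(self, num_joints=17, channels=256, dilations=(1, 2, 4)):
            super().__init__()
            in_dim = num_joints * 2  # flattened (x, y) per joint, per frame
            self.expand = nn.Conv1d(in_dim, channels, kernel_size=1)
            # One dilated temporal branch per scale; padding keeps length fixed.
            self.branches = nn.ModuleList(
                nn.Conv1d(channels, channels, kernel_size=3, dilation=d, padding=d)
                for d in dilations
            )
            # Scores one attention logit per frame from the fused features.
            self.frame_attn = nn.Conv1d(channels, 1, kernel_size=1)
            self.head = nn.Linear(channels, num_joints * 3)

        def forward(self, pose2d):
            # pose2d: (batch, frames, joints, 2) -> (batch, in_dim, frames)
            b, t, j, _ = pose2d.shape
            x = pose2d.reshape(b, t, j * 2).transpose(1, 2)
            x = torch.relu(self.expand(x))
            # Sum the dilated branches: a multi-scale temporal receptive field.
            x = torch.relu(sum(branch(x) for branch in self.branches))
            # Softmax attention over frames, then weighted temporal pooling.
            w = torch.softmax(self.frame_attn(x), dim=-1)  # (b, 1, t)
            pooled = (x * w).sum(dim=-1)                   # (b, channels)
            return self.head(pooled).reshape(b, j, 3)      # one 3D pose per clip

    if __name__ == "__main__":
        model = AttentionDilatedLifter()
        clip = torch.randn(2, 243, 17, 2)  # a 243-frame 2D keypoint clip
        print(model(clip).shape)           # torch.Size([2, 17, 3])

Dilations of (1, 2, 4) with kernel size 3 give the fused features an effective temporal span of roughly 15 frames per block; stacking such blocks, as video-lifting networks typically do, grows the receptive field further.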
Total citations
[Citations-per-year chart, 2020–2024; per-year counts not recoverable]
Scholar articles
R Liu, J Shen, H Wang, C Chen, S Cheung, V Asari - Proceedings of the IEEE/CVF conference on computer …, 2020