Authors
Ce Zheng, Sijie Zhu, Matias Mendieta, Taojiannan Yang, Chen Chen, Zhengming Ding
Publication date
2021
Conference paper
Proceedings of the IEEE/CVF international conference on computer vision
Pages
11656-11665
Description
Transformer architectures have become the model of choice in natural language processing and are now being introduced into computer vision tasks such as image classification, object detection, and semantic segmentation. However, in the field of human pose estimation, convolutional architectures remain dominant. In this work, we present PoseFormer, a purely transformer-based approach for 3D human pose estimation in videos without convolutional architectures involved. Inspired by recent developments in vision transformers, we design a spatial-temporal transformer structure to comprehensively model the human joint relations within each frame as well as the temporal correlations across frames, then output an accurate 3D human pose of the center frame. We quantitatively and qualitatively evaluate our method on two popular and standard benchmark datasets: Human3.6M and MPI-INF-3DHP. Extensive experiments show that PoseFormer achieves state-of-the-art performance on both datasets. Our code and model will be publicly available.
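The abstract describes a two-stage spatial-temporal design: a spatial transformer models joint relations within each frame, a temporal transformer models correlations across frames, and a regression head outputs the 3D pose of the center frame. The following NumPy sketch illustrates only that data flow under simplifying assumptions (single-head attention without learned Q/K/V projections, random untrained weights, no positional embeddings); it is not the authors' implementation, whose details are in the paper and released code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, dim). Minimal single-head attention; for brevity the
    # query/key/value projections are identity, unlike a real transformer.
    d = x.shape[-1]
    attn = softmax(x @ x.T / np.sqrt(d))
    return attn @ x

def poseformer_sketch(joints_2d, dim=8, seed=0):
    """Illustrative pipeline: (frames, joints, 2) 2D input -> (joints, 3) 3D pose.

    Hypothetical untrained weights; shows the spatial-then-temporal flow only.
    """
    F, J, _ = joints_2d.shape
    rng = np.random.default_rng(seed)
    embed = rng.standard_normal((2, dim)) * 0.1   # per-joint linear embedding

    # Spatial transformer: each joint is a token; attend across the J joints
    # of every frame independently.
    spatial = np.stack([self_attention(joints_2d[f] @ embed) for f in range(F)])

    # Temporal transformer: each frame's joint features are flattened into one
    # token; attend across the F frames.
    temporal = self_attention(spatial.reshape(F, J * dim))

    # Regression head: pool over frames and map to the center frame's 3D joints.
    head = rng.standard_normal((J * dim, J * 3)) * 0.1
    return (temporal.mean(axis=0) @ head).reshape(J, 3)

out = poseformer_sketch(np.zeros((9, 17, 2)))  # 9 frames, 17 joints
```

The two attention stages mirror the paper's factorization: joint-level structure is resolved before frame-level dynamics, which keeps the temporal sequence length equal to the number of frames rather than frames × joints.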