Authors
Xianhang Li, Yali Wang, Zhipeng Zhou, Yu Qiao
Publication date
2020
Conference paper
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition
Pages
1092-1101
Description
Temporal convolution has been widely used for video classification. However, it is performed on spatio-temporal contexts in a limited view, which often weakens its capacity of learning video representation. To alleviate this problem, we propose a concise and novel SmallBig network, with the cooperation of small and big views. For the current time step, the small view branch is used to learn the core semantics, while the big view branch is used to capture the contextual semantics. Unlike traditional temporal convolution, the big view branch can provide the small view branch with the most activated video features from a broader 3D receptive field. Via aggregating such big-view contexts, the small view branch can learn more robust and discriminative spatio-temporal representations for video classification. Furthermore, we propose to share convolution in the small and big view branches, which improves model compactness as well as alleviates overfitting. As a result, our SmallBigNet achieves a comparable model size like 2D CNNs, while boosting accuracy like 3D CNNs. We conduct extensive experiments on the large-scale video benchmarks, e.g., Kinetics400, Something-Something V1 and V2. Our SmallBig network outperforms a number of recent state-of-the-art approaches, in terms of accuracy and/or efficiency. The codes and models will be available on https://github.com/xhl-video/SmallBigNet.
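The big-view idea described above, selecting the most activated feature within a broader 3D spatio-temporal neighborhood and aggregating it into the current position, can be illustrated with a minimal sketch. This is not the authors' implementation; the 3x3x3 neighborhood, the max-based selection, and the additive aggregation are illustrative assumptions made here for a single-channel feature map.

```python
import numpy as np

def big_view_aggregate(x):
    """Illustrative big-view branch (assumption, not the official code):
    for each spatio-temporal position, take the most activated value in a
    3x3x3 neighborhood, i.e. a broader 3D receptive field than the small
    view's single position. x has shape (T, H, W) for one channel."""
    T, H, W = x.shape
    # Pad with -inf so out-of-range positions never win the max.
    p = np.pad(x, 1, mode="constant", constant_values=-np.inf)
    out = np.empty_like(x)
    for t in range(T):
        for h in range(H):
            for w in range(W):
                out[t, h, w] = p[t:t + 3, h:h + 3, w:w + 3].max()
    return out

def smallbig_unit(x):
    """Sketch of the cooperation: the small view keeps the core semantics
    at each position, and the big-view context is aggregated onto it
    (additive aggregation is an assumption here)."""
    return x + big_view_aggregate(x)
```

For a tiny 2x2x2 input the 3x3x3 window covers the whole clip, so every output position of `big_view_aggregate` equals the global maximum; on larger inputs each position sees only its local spatio-temporal neighborhood.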
Total citations
[Per-year citation chart, 2019–2024; per-year counts garbled in extraction]
Scholar articles
X Li, Y Wang, Z Zhou, Y Qiao - Proceedings of the IEEE/CVF conference on computer …, 2020