Authors
Max Bain, Arsha Nagrani, Gül Varol, Andrew Zisserman
Publication date
2021
Conference
Proceedings of the IEEE/CVF International Conference on Computer Vision
Pages
1728-1738
Description
Our objective in this work is video-text retrieval, in particular a joint embedding that enables efficient text-to-video retrieval. The challenges in this area include the design of the visual architecture and the nature of the training data, in that the available large-scale video-text training datasets, such as HowTo100M, are noisy and hence competitive performance is achieved only at scale through large amounts of compute. We address both these challenges in this paper. We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. Our model is an adaptation and extension of the recent ViT and TimeSformer architectures, and consists of attention in both space and time. The model is flexible and can be trained on both image and video text datasets, either independently or in conjunction. It is trained with a curriculum learning schedule that begins by treating images as 'frozen' snapshots of video, and then gradually learns to attend to increasing temporal context when trained on video datasets. We also provide a new video-text pretraining dataset, WebVid-2M, comprising over two million videos with weak captions scraped from the internet. Despite training on datasets that are an order of magnitude smaller, we show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks including MSR-VTT, DiDeMo and MSVD.
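The sketch below illustrates the general idea described in the abstract: a dual-encoder retrieval setup in which a space-time transformer encodes clips of T frames, so a still image can be fed in as a T=1 "frozen" snapshot through the same encoder, and text-to-video retrieval is performed by ranking cosine similarities between text and visual embeddings. This is a minimal, hypothetical illustration; the module names, dimensions, and the toy text encoder are assumptions for clarity, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): a space-time visual encoder
# that accepts both single-frame images (T=1) and video clips (T>1), paired
# with a toy text encoder, producing embeddings for text-to-video retrieval.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpaceTimeEncoder(nn.Module):
    """Encodes a clip of T frames (T=1 for a still image) into one embedding."""

    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        self.patch_proj = nn.Linear(3 * 32 * 32, dim)  # toy 32x32 RGB patches
        self.spatial = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.proj = nn.Linear(dim, dim)

    def forward(self, frames):                      # frames: (B, T, P, 3*32*32)
        b, t, p, _ = frames.shape
        x = self.patch_proj(frames)                 # (B, T, P, D)
        x = self.spatial(x.flatten(0, 1)).mean(1)   # attention over patches -> (B*T, D)
        x = self.temporal(x.view(b, t, -1)).mean(1) # attention over frames  -> (B, D)
        return F.normalize(self.proj(x), dim=-1)


class TextEncoder(nn.Module):
    """Toy bag-of-embeddings text encoder standing in for a language model."""

    def __init__(self, vocab=1000, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):                      # tokens: (B, L)
        return F.normalize(self.proj(self.emb(tokens).mean(1)), dim=-1)


if __name__ == "__main__":
    vid_enc, txt_enc = SpaceTimeEncoder(), TextEncoder()
    image_clip = torch.randn(2, 1, 49, 3 * 32 * 32)   # T=1: image as a frozen snapshot
    video_clip = torch.randn(2, 4, 49, 3 * 32 * 32)   # T=4: short video clip
    captions = torch.randint(0, 1000, (2, 12))
    txt_emb = txt_enc(captions)
    img_sim = txt_emb @ vid_enc(image_clip).T         # same encoder handles images...
    vid_sim = txt_emb @ vid_enc(video_clip).T         # ...and videos; rank columns to retrieve
    print(img_sim.shape, vid_sim.shape)               # both (2, 2)
```

In this reading, the curriculum amounts to training first on image-caption pairs (T=1), then progressively on clips with more frames, so the temporal attention is learned on top of an already useful spatial representation.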