Expanding Language-Image Pretrained Models for General Video Recognition
- URL: http://arxiv.org/abs/2208.02816v1
- Date: Thu, 4 Aug 2022 17:59:54 GMT
- Title: Expanding Language-Image Pretrained Models for General Video Recognition
- Authors: Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng,
Jianlong Fu, Shiming Xiang, Haibin Ling
- Abstract summary: Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data.
We present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly.
Our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols.
- Score: 136.0948049010682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive language-image pretraining has shown great success in learning
visual-textual joint representation from web-scale data, demonstrating
remarkable "zero-shot" generalization ability for various image tasks. However,
how to effectively expand such new language-image pretraining methods to video
domains is still an open problem. In this work, we present a simple yet
effective approach that adapts the pretrained language-image models to video
recognition directly, instead of pretraining a new model from scratch. More
concretely, to capture the long-range dependencies of frames along the temporal
dimension, we propose a cross-frame attention mechanism that explicitly
exchanges information across frames. This module is lightweight and can be
plugged into pretrained language-image models seamlessly. Moreover, we propose
a video-specific prompting scheme, which leverages video content information
for generating discriminative textual prompts. Extensive experiments
demonstrate that our approach is effective and can be generalized to different
video recognition scenarios. In particular, under fully-supervised settings,
our approach achieves a top-1 accuracy of 87.1% on Kinetics-400, while using
12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot
experiments, our approach surpasses the current state-of-the-art methods by
+7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In
few-shot scenarios, our approach outperforms previous best methods by +32.1%
and +23.1% when the labeled data is extremely limited. Code and models are
available at https://aka.ms/X-CLIP
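The cross-frame attention module is only described at a high level in this abstract. The following is a minimal PyTorch sketch of the general idea, assuming each frame contributes a [CLS]-style summary token that is mixed across time and written back before the next frozen spatial block; the class name, layer choices, and tensor shapes are illustrative assumptions, not the released X-CLIP implementation.

```python
import torch
import torch.nn as nn

class CrossFrameMessageAttention(nn.Module):
    """Sketch: frames exchange information through per-frame summary
    tokens that attend to each other along the temporal axis.
    Hypothetical illustration, not the authors' implementation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.msg_proj = nn.Linear(dim, dim)  # build a message per frame
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_frames, num_patches + 1, dim);
        # index 0 on the patch axis is the per-frame [CLS] token.
        cls_tokens = frame_tokens[:, :, 0]            # (B, T, D)
        messages = self.msg_proj(cls_tokens)          # (B, T, D)
        # Each frame's message attends to the messages of all other frames.
        mixed, _ = self.temporal_attn(messages, messages, messages)
        mixed = self.norm(cls_tokens + mixed)
        # Write the temporally mixed summary back into the [CLS] slot so the
        # following frozen spatial transformer block can read it.
        out = frame_tokens.clone()
        out[:, :, 0] = mixed
        return out
```

Consistent with the abstract's claim, a module of this shape adds only a small number of parameters and leaves the pretrained image-encoder weights untouched, which is what makes it pluggable.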
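The video-specific prompting scheme is likewise only sketched in the abstract. Below is a hedged illustration, assuming class-name embeddings from the pretrained text encoder are enriched with per-frame video features via cross-attention and then matched to a pooled video embedding by cosine similarity in the usual CLIP zero-shot style; all names and design choices here are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoSpecificPrompt(nn.Module):
    """Sketch: condition class-name text embeddings on video content via
    cross-attention, then score classes by cosine similarity.
    Hypothetical layer and parameter choices for illustration only."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_emb: torch.Tensor, video_tokens: torch.Tensor) -> torch.Tensor:
        # text_emb:     (num_classes, dim)  e.g. text features of class names
        # video_tokens: (num_frames, dim)   per-frame features of one video
        q = text_emb.unsqueeze(0)                   # (1, C, D)
        kv = video_tokens.unsqueeze(0)              # (1, T, D)
        ctx, _ = self.cross_attn(q, kv, kv)         # text queries attend to video
        return self.norm(text_emb + ctx.squeeze(0)) # video-conditioned prompts


def zero_shot_scores(video_emb: torch.Tensor, prompt_emb: torch.Tensor) -> torch.Tensor:
    # Cosine-similarity classification, as in CLIP-style zero-shot transfer.
    v = F.normalize(video_emb, dim=-1)              # (dim,)
    p = F.normalize(prompt_emb, dim=-1)             # (num_classes, dim)
    return p @ v                                    # (num_classes,) scores
```

A typical use would be to pass the conditioned prompt embeddings and a pooled video embedding to zero_shot_scores and take the argmax over classes.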
Related papers
- Pretrained Image-Text Models are Secretly Video Captioners [38.66202065611397]
We find that an image-based model can be repurposed to outperform several specialised video captioning systems.
Our adapted model demonstrates top-tier performance on major benchmarks, ranking 2nd on MSRVTT and MSVD, and 3rd on VATEX.
From a resource optimization perspective, this video captioning study focuses on three fundamental factors: optimizing model scale, maximizing data efficiency, and incorporating reinforcement learning.
arXiv Detail & Related papers (2025-02-19T01:53:03Z)
- RETTA: Retrieval-Enhanced Test-Time Adaptation for Zero-Shot Video Captioning [69.23782518456932]
We propose a novel zero-shot video captioning framework named Retrieval-Enhanced Test-Time Adaptation (RETTA).
We bridge video and text using four key models: a general video-text retrieval model XCLIP, a general image-text matching model CLIP, a text alignment model AnglE, and a text generation model GPT-2.
We propose using learnable tokens as a communication medium among these four frozen models (GPT-2, XCLIP, CLIP, and AnglE).
arXiv Detail & Related papers (2024-05-11T16:22:00Z)
- Building an Open-Vocabulary Video CLIP Model with Better Architectures, Optimization and Data [102.0069667710562]
This paper presents Open-VCLIP++, a framework that adapts CLIP to a strong zero-shot video classifier.
We demonstrate that training Open-VCLIP++ is tantamount to continual learning with zero historical data.
Our approach is evaluated on three widely used action recognition datasets.
arXiv Detail & Related papers (2023-10-08T04:46:43Z)
- Open-VCLIP: Transforming CLIP to an Open-vocabulary Video Model via Interpolated Weight Optimization [82.75718846187685]
We introduce Open-VCLIP, a simple yet effective approach that transforms CLIP into a strong zero-shot video classifier.
We show that training an Open-VCLIP is equivalent to continual learning with zero historical data.
In particular, we achieve 87.9%, 58.3%, and 81.1% zero-shot accuracy on the UCF, HMDB, and Kinetics-600 datasets.
arXiv Detail & Related papers (2023-02-01T17:44:17Z)
- Bidirectional Cross-Modal Knowledge Exploration for Video Recognition with Pre-trained Vision-Language Models [149.1331903899298]
We propose a novel framework called BIKE, which utilizes the cross-modal bridge to explore bidirectional knowledge.
We present a Temporal Concept Spotting mechanism that uses the Text-to-Video expertise to capture temporal saliency in a parameter-free manner.
Our best model achieves a state-of-the-art accuracy of 88.6% on the challenging Kinetics-400 using the released CLIP model.
arXiv Detail & Related papers (2022-12-31T11:36:53Z)
- Frozen CLIP Models are Efficient Video Learners [86.73871814176795]
Video recognition has been dominated by the end-to-end learning paradigm.
Recent advances in Contrastive Vision-Language Pre-training pave the way for a new route for visual recognition tasks.
We present Efficient Video Learning, an efficient framework for directly training high-quality video recognition models.
arXiv Detail & Related papers (2022-08-06T17:38:25Z)
- Learning Spatiotemporal Features via Video and Text Pair Discrimination [30.64670449131973]
The cross-modal pair discrimination (CPD) framework captures the correlation between a video and its associated text.
We train CPD models on both a standard video dataset (Kinetics-210k) and an uncurated web video dataset (-300k) to demonstrate their effectiveness.
arXiv Detail & Related papers (2020-01-16T08:28:57Z)