VideoMamba: State Space Model for Efficient Video Understanding
- URL: http://arxiv.org/abs/2403.06977v2
- Date: Tue, 12 Mar 2024 15:22:52 GMT
- Title: VideoMamba: State Space Model for Efficient Video Understanding
- Authors: Kunchang Li, Xinhao Li, Yi Wang, Yinan He, Yali Wang, Limin Wang, and
Yu Qiao
- Abstract summary: VideoMamba overcomes the limitations of existing 3D convolutional neural networks and video transformers.
Its linear-complexity operator enables efficient long-term modeling.
VideoMamba sets a new benchmark for video understanding.
- Score: 46.17083617091239
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Addressing the dual challenges of local redundancy and global dependencies in
video understanding, this work adapts Mamba to the video
domain. The proposed VideoMamba overcomes the limitations of existing 3D
convolutional neural networks and video transformers. Its linear-complexity
operator enables efficient long-term modeling, which is crucial for
high-resolution long video understanding. Extensive evaluations reveal
VideoMamba's four core abilities: (1) Scalability in the visual domain without
extensive dataset pretraining, thanks to a novel self-distillation technique;
(2) Sensitivity for recognizing short-term actions even with fine-grained
motion differences; (3) Superiority in long-term video understanding,
showcasing significant advancements over traditional feature-based models; and
(4) Compatibility with other modalities, demonstrating robustness in
multi-modal contexts. Through these distinct advantages, VideoMamba sets a new
benchmark for video understanding, offering a scalable and efficient solution
for comprehensive video understanding. All the code and models are available at
https://github.com/OpenGVLab/VideoMamba.
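The linear-complexity claim is the key architectural point: a state-space scan touches each token once, so cost grows linearly with sequence length rather than quadratically as in self-attention. Below is a minimal sketch of that idea, not the paper's implementation (VideoMamba uses a selective, bidirectional Mamba scan; see the linked repo): a clip is flattened into tokens by 3D patch embedding and mixed by a simplified, non-selective per-channel recurrence. All module names, dimensions, and patch sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LinearScan(nn.Module):
    """Per-channel linear recurrence h_t = a * h_{t-1} + x_t.

    One pass over the sequence: O(L) time with O(1) state per token,
    versus the O(L^2) cost of full self-attention.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.decay_logit = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        a = torch.sigmoid(self.decay_logit)      # per-channel decay in (0, 1)
        h = x.new_zeros(x.shape[0], x.shape[2])  # running state, (batch, dim)
        outs = []
        for t in range(x.shape[1]):
            h = a * h + x[:, t]                  # linear-time recurrence step
            outs.append(h)
        return torch.stack(outs, dim=1)          # (batch, seq_len, dim)


class VideoTokens(nn.Module):
    """3D patch embedding: (B, C, T, H, W) video -> (B, L, D) token sequence."""

    def __init__(self, in_ch: int = 3, dim: int = 192, patch=(2, 16, 16)):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, dim, kernel_size=patch, stride=patch)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        x = self.proj(video)                     # (B, D, T', H', W')
        return x.flatten(2).transpose(1, 2)      # (B, L, D), L = T'*H'*W'


# Usage: an 8-frame 224x224 clip becomes 4*14*14 = 784 tokens,
# which the linear-time scan then mixes.
video = torch.randn(1, 3, 8, 224, 224)
tokens = VideoTokens()(video)                    # (1, 784, 192)
features = LinearScan(192)(tokens)               # (1, 784, 192)
```

The Python loop is written out for clarity only; production Mamba kernels compute the same kind of recurrence with a hardware-efficient parallel scan.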
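The abstract also attributes scalability without large-scale pretraining to a novel self-distillation technique. The recipe is not detailed in this listing, so the sketch below only shows the generic shape of such a setup under one common assumption: a smaller, already-trained model acts as a frozen teacher, and the larger student is regularized to match its features. The projection layer and MSE loss are illustrative choices, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def self_distillation_loss(
    student: nn.Module,
    teacher: nn.Module,
    proj: nn.Module,
    clip: torch.Tensor,
) -> torch.Tensor:
    """Align the (larger) student's features to a frozen, smaller teacher."""
    with torch.no_grad():
        target = teacher(clip)         # teacher is frozen: no gradients flow back
    return F.mse_loss(proj(student(clip)), target)


# Usage with stand-in models: a wide "student" distilled toward a narrow "teacher".
student = nn.Linear(32, 512)           # stands in for the larger model
teacher = nn.Linear(32, 256)           # stands in for the smaller, trained model
proj = nn.Linear(512, 256)             # maps student width to teacher width
loss = self_distillation_loss(student, teacher, proj, torch.randn(4, 32))
loss.backward()
```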
Related papers
- Realizing Video Summarization from the Path of Language-based Semantic Understanding [19.825666473712197]
We propose a novel video summarization framework inspired by the Mixture of Experts (MoE) paradigm.
Our approach integrates multiple VideoLLMs to generate comprehensive and coherent textual summaries.
arXiv Detail & Related papers (2024-10-06T15:03:22Z)
- Mamba Fusion: Learning Actions Through Questioning [12.127052057927182]
Video Language Models (VLMs) are crucial for generalizing across diverse tasks and using language cues to enhance learning.
We introduce MambaVL, a novel model that efficiently captures long-range dependencies and learns joint representations for vision and language data.
MambaVL achieves state-of-the-art performance in action recognition on the Epic-Kitchens-100 dataset.
arXiv Detail & Related papers (2024-09-17T19:36:37Z)
- VideoMamba: Spatio-Temporal Selective State Space Model [18.310796559944347]
VideoMamba is a novel adaptation of the pure Mamba architecture, specifically designed for video recognition.
VideoMamba is not only resource-efficient but also effective in capturing long-range dependencies in videos.
Our work highlights the potential of VideoMamba as a powerful tool for video understanding, offering a simple yet effective baseline for future research in video analysis.
arXiv Detail & Related papers (2024-07-11T13:11:21Z)
- Visual Mamba: A Survey and New Outlooks [33.90213491829634]
Mamba, a recent selective structured state space model, excels in long sequence modeling.
Since January 2024, Mamba has been actively applied to diverse computer vision tasks.
This paper reviews visual Mamba approaches, analyzing over 200 papers.
arXiv Detail & Related papers (2024-04-29T16:51:30Z)
- Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding [49.88140766026886]
The state space model Mamba shows promise in extending its success in long-sequence modeling to video modeling.
We conduct a comprehensive set of studies, probing different roles Mamba can play in modeling videos, while investigating diverse tasks where Mamba could exhibit superiority.
Our experiments reveal the strong potential of Mamba on both video-only and video-language tasks while showing promising efficiency-performance trade-offs.
arXiv Detail & Related papers (2024-03-14T17:57:07Z)
- Vivim: a Video Vision Mamba for Medical Video Segmentation [52.11785024350253]
This paper presents a Video Vision Mamba-based framework, dubbed Vivim, for medical video segmentation tasks.
Our Vivim can effectively compress the long-term representation into sequences at varying scales.
Experiments on thyroid segmentation, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim.
arXiv Detail & Related papers (2024-01-25T13:27:03Z)
- Multi-Modal Video Topic Segmentation with Dual-Contrastive Domain Adaptation [74.51546366251753]
Video topic segmentation unveils the coarse-grained semantic structure underlying videos.
We introduce a multi-modal video topic segmenter that utilizes both video transcripts and frames.
Our proposed solution significantly surpasses baseline methods in terms of both accuracy and transferability.
arXiv Detail & Related papers (2023-11-30T21:59:05Z)
- MVBench: A Comprehensive Multi-modal Video Understanding Benchmark [63.14000659130736]
We introduce a comprehensive Multi-modal Video understanding Benchmark, namely MVBench.
We first introduce a novel static-to-dynamic method to define these temporal-related tasks.
Then, guided by the task definition, we automatically convert public video annotations into multiple-choice QA to evaluate each task.
arXiv Detail & Related papers (2023-11-28T17:59:04Z)
- MVFNet: Multi-View Fusion Network for Efficient Video Recognition [79.92736306354576]
We introduce a multi-view fusion (MVF) module to exploit video complexity using separable convolution for efficiency.
MVFNet can be thought of as a generalized video modeling framework.
arXiv Detail & Related papers (2020-12-13T06:34:18Z)