M4V: Multi-Modal Mamba for Text-to-Video Generation
- URL: http://arxiv.org/abs/2506.10915v1
- Date: Thu, 12 Jun 2025 17:29:40 GMT
- Title: M4V: Multi-Modal Mamba for Text-to-Video Generation
- Authors: Jiancheng Huang, Gengwei Zhang, Zequn Jie, Siyu Jiao, Yinlong Qian, Ling Chen, Yunchao Wei, Lin Ma,
- Abstract summary: Text-to-video generation has enriched content creation and holds the potential to create powerful world simulators. Modeling the vast spatiotemporal space remains computationally demanding, particularly when employing architectures with quadratic complexity in sequence processing. We introduce a Multi-Modal Mamba framework for text-to-video generation. Experiments on text-to-video benchmarks demonstrate M4V's ability to produce high-quality videos while significantly lowering computational costs.
- Score: 58.51139515986472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-video generation has significantly enriched content creation and holds the potential to evolve into powerful world simulators. However, modeling the vast spatiotemporal space remains computationally demanding, particularly when employing Transformers, which incur quadratic complexity in sequence processing and thus limit practical applications. Recent advancements in linear-time sequence modeling, particularly the Mamba architecture, offer a more efficient alternative. Nevertheless, its plain design limits its direct applicability to multi-modal and spatiotemporal video generation tasks. To address these challenges, we introduce M4V, a Multi-Modal Mamba framework for text-to-video generation. Specifically, we propose a multi-modal diffusion Mamba (MM-DiM) block that enables seamless integration of multi-modal information and spatiotemporal modeling through a multi-modal token re-composition design. As a result, the Mamba blocks in M4V reduce FLOPs by 45% compared to the attention-based alternative when generating videos at 768$\times$1280 resolution. Additionally, to mitigate the visual quality degradation in long-context autoregressive generation processes, we introduce a reward learning strategy that further enhances per-frame visual realism. Extensive experiments on text-to-video benchmarks demonstrate M4V's ability to produce high-quality videos while significantly lowering computational costs. Code and models will be publicly available at https://huangjch526.github.io/M4V_project.
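The efficiency claim in the abstract (attention is quadratic in sequence length, a Mamba-style scan is linear) can be illustrated with a toy sketch. This is not the paper's implementation; the function names, FLOP constants, and the scalar diagonal-SSM recurrence below are simplified assumptions for illustration only.

```python
def attention_flops(seq_len: int, dim: int) -> int:
    # Self-attention is dominated by the QK^T and (softmax(QK^T))V matmuls,
    # each roughly 2 * L^2 * d multiply-adds: quadratic in sequence length L.
    return 4 * seq_len * seq_len * dim


def scan_flops(seq_len: int, dim: int, state: int) -> int:
    # A diagonal state-space recurrence does a constant amount of work per
    # token (here taken as ~6 * d * N ops), so cost is linear in L.
    return 6 * seq_len * dim * state


def ssm_scan(x: list[float], a: float, b: float, c: float) -> list[float]:
    # Toy single-channel diagonal SSM recurrence:
    #   h_t = a * h_{t-1} + b * x_t,    y_t = c * h_t
    # One left-to-right pass over the sequence -> linear time in len(x).
    h = 0.0
    ys = []
    for xt in x:
        h = a * h + b * xt
        ys.append(c * h)
    return ys
```

Doubling the sequence length quadruples `attention_flops` but only doubles `scan_flops`, which is the motivation for replacing attention with linear-time Mamba blocks at the large token counts of 768x1280 video.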
Related papers
- VSRM: A Robust Mamba-Based Framework for Video Super-Resolution [1.8506868409351092]
Video super-resolution remains a major challenge in low-level vision tasks. In this work, we propose VSRM, a novel framework for processing long sequences in video. VSRM achieves state-of-the-art results on diverse benchmarks, establishing itself as a solid foundation for future research.
arXiv Detail & Related papers (2025-06-28T05:51:42Z) - MLVTG: Mamba-Based Feature Alignment and LLM-Driven Purification for Multi-Modal Video Temporal Grounding [13.025856914576673]
Video Temporal Grounding aims to localize video clips corresponding to natural language queries. Existing Transformer-based methods often suffer from redundant attention and suboptimal multi-modal alignment. We propose MLVTG, a novel framework that integrates two key modules: MambaAligner and LLMRefiner.
arXiv Detail & Related papers (2025-06-10T07:20:12Z) - OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts [46.77966058862399]
We introduce OmniMMI, a comprehensive multi-modal interaction benchmark tailored for OmniLLMs in streaming video contexts. We also propose a novel framework, Multi-modal Multiplexing Modeling (M4), designed to enable an inference-efficient streaming model that can see and listen while generating.
arXiv Detail & Related papers (2025-03-29T02:46:58Z) - Mobile-VideoGPT: Fast and Accurate Video Understanding Language Model [60.171601995737646]
Mobile-VideoGPT is an efficient multimodal framework for video understanding. It consists of lightweight dual visual encoders, efficient projectors, and a small language model (SLM). Our results show that Mobile-VideoGPT-0.5B can generate up to 46 tokens per second.
arXiv Detail & Related papers (2025-03-27T17:59:58Z) - VideoMAP: Toward Scalable Mamba-based Video Autoregressive Pretraining [31.44538839153902]
VideoMAP is a hybrid Mamba-Transformer framework featuring a novel pre-training approach. We show that VideoMAP exhibits impressive sample efficiency, significantly outperforming existing methods with less training data. We also demonstrate the potential of VideoMAP as a visual encoder for multimodal large language models.
arXiv Detail & Related papers (2025-03-16T03:01:07Z) - Token-Efficient Long Video Understanding for Multimodal LLMs [101.70681093383365]
STORM is a novel architecture incorporating a dedicated temporal encoder between the image encoder and the video LLM. We show that STORM achieves state-of-the-art results across various long video understanding benchmarks.
arXiv Detail & Related papers (2025-03-06T06:17:38Z) - Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation [36.44678935063189]
mmMamba is a framework for developing linear-complexity native multimodal state space models. Our approach enables the direct conversion of trained decoder-only MLLMs into linear-complexity architectures.
arXiv Detail & Related papers (2025-02-18T18:59:57Z) - DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation [54.30327187663316]
DiTCtrl is the first training-free multi-prompt video generation method built on MM-DiT architectures. We analyze MM-DiT's attention mechanism, finding that its 3D full attention behaves similarly to the cross/self-attention blocks in UNet-like diffusion models. Thanks to this careful design, video generated by DiTCtrl achieves smooth transitions and consistent object motion given multiple sequential prompts.
arXiv Detail & Related papers (2024-12-24T18:51:19Z) - MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNNs and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% on Top-1, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z) - When Video Coding Meets Multimodal Large Language Models: A Unified Paradigm for Video Coding [118.72266141321647]
Cross-Modality Video Coding (CMVC) is a pioneering approach that explores multimodal representation and video generative models in video coding. During decoding, previously encoded components and video generation models are leveraged to create multiple encoding-decoding modes. Experiments indicate that TT2V achieves effective semantic reconstruction, while IT2V exhibits competitive perceptual consistency.
arXiv Detail & Related papers (2024-08-15T11:36:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.