Mixture of Contexts for Long Video Generation
- URL: http://arxiv.org/abs/2508.21058v1
- Date: Thu, 28 Aug 2025 17:57:55 GMT
- Title: Mixture of Contexts for Long Video Generation
- Authors: Shengqu Cai, Ceyuan Yang, Lvmin Zhang, Yuwei Guo, Junfei Xiao, Ziyan Yang, Yinghao Xu, Zhenheng Yang, Alan Yuille, Leonidas Guibas, Maneesh Agrawala, Lu Jiang, Gordon Wetzstein
- Abstract summary: We recast long-context video generation as an internal information retrieval task. We propose a simple, learnable sparse attention routing module, Mixture of Contexts (MoC), as an effective long-term memory retrieval engine. As we scale the data and gradually sparsify the routing, the model allocates compute to salient history, preserving identities, actions, and scenes over minutes of content.
- Score: 72.96361488755986
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Long video generation is fundamentally a long context memory problem: models must retain and retrieve salient events across a long range without collapsing or drifting. However, scaling diffusion transformers to generate long-context videos is fundamentally limited by the quadratic cost of self-attention, which makes memory and computation intractable and difficult to optimize for long sequences. We recast long-context video generation as an internal information retrieval task and propose a simple, learnable sparse attention routing module, Mixture of Contexts (MoC), as an effective long-term memory retrieval engine. In MoC, each query dynamically selects a few informative chunks plus mandatory anchors (caption, local windows) to attend to, with causal routing that prevents loop closures. As we scale the data and gradually sparsify the routing, the model allocates compute to salient history, preserving identities, actions, and scenes over minutes of content. Efficiency follows as a byproduct of retrieval (near-linear scaling), which enables practical training and synthesis, and the emergence of memory and consistency at the scale of minutes.
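The routing mechanism described in the abstract lends itself to a compact sketch. Below is a minimal, single-head illustration of MoC-style sparse attention routing, assuming fixed-size chunks, mean-pooled chunk keys as routing descriptors, and chunk-level (rather than token-level) causal masking; the names `moc_attention`, `chunk_size`, `top_k`, and `num_anchor_chunks` are hypothetical, and the paper's actual design may differ.

```python
# Minimal single-head sketch of MoC-style sparse routing (assumptions above).
import torch
import torch.nn.functional as F

def moc_attention(q, k, v, chunk_size=64, top_k=4, num_anchor_chunks=1):
    """q, k, v: (seq_len, dim), seq_len divisible by chunk_size.
    Each query chunk attends to mandatory anchor chunks (e.g. the caption
    at the start), its own local chunk, and the top-k past chunks whose
    mean-pooled keys best match its mean-pooled query."""
    seq_len, dim = q.shape
    assert seq_len % chunk_size == 0
    n_chunks = seq_len // chunk_size
    k_chunks = k.view(n_chunks, chunk_size, dim)
    v_chunks = v.view(n_chunks, chunk_size, dim)
    chunk_desc = k_chunks.mean(dim=1)                        # (n_chunks, dim)

    out = torch.zeros_like(q)
    for c in range(n_chunks):
        q_c = q[c * chunk_size:(c + 1) * chunk_size]
        # Causal routing: only current and past chunks can be scored,
        # which prevents loop closures through future content.
        scores = q_c.mean(dim=0) @ chunk_desc.T              # (n_chunks,)
        scores[c + 1:] = float("-inf")
        picked = scores.topk(min(top_k, c + 1)).indices.tolist()
        # Mandatory anchors and the local chunk are always attended to.
        selected = sorted(set(range(num_anchor_chunks)) | set(picked) | {c})
        k_sel = torch.cat([k_chunks[i] for i in selected])
        v_sel = torch.cat([v_chunks[i] for i in selected])
        attn = F.softmax(q_c @ k_sel.T / dim ** 0.5, dim=-1)
        out[c * chunk_size:(c + 1) * chunk_size] = attn @ v_sel
    return out
```

Since every query chunk touches at most `num_anchor_chunks + top_k + 1` chunks regardless of sequence length, total cost scales near-linearly, which is the efficiency byproduct the abstract describes.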
Related papers
- MemFlow: Flowing Adaptive Memory for Consistent and Efficient Long Video Narratives [54.07515675393396]
Existing solutions maintain memory by compressing historical frames with predefined strategies. We propose MemFlow to address this problem. MemFlow achieves outstanding long-context consistency with negligible computational burden.
arXiv Detail & Related papers (2025-12-16T18:59:59Z)
- VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management [17.645183933549458]
VideoMem is a novel framework that pioneers modeling long video understanding as a sequential generation task via adaptive memory management. We show that VideoMem significantly outperforms existing open-source models across diverse benchmarks for ultra-long video understanding tasks.
arXiv Detail & Related papers (2025-12-04T07:42:13Z)
- WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning [66.24870234484668]
We introduce WorldMM, a novel multimodal memory agent that constructs and retrieves from multiple complementary memories. WorldMM significantly outperforms existing baselines across five long video question-answering benchmarks.
arXiv Detail & Related papers (2025-12-02T05:14:52Z)
- Pack and Force Your Memory: Long-form and Consistent Video Generation [26.53691150499802]
Long-form video generation presents a dual challenge: models must capture long-range dependencies while preventing the error accumulation inherent in autoregressive decoding. MemoryPack and Direct Forcing substantially enhance the context consistency and reliability of long-form video generation.
arXiv Detail & Related papers (2025-10-02T08:22:46Z)
- Towards Chunk-Wise Generation for Long Videos [40.93693702874981]
We conduct a survey on long video generation with the autoregressive chunk-by-chunk strategy. We address common problems caused by applying short image-to-video and chunk-to-video models to long video tasks.
arXiv Detail & Related papers (2024-11-27T16:13:26Z)
- ReWind: Understanding Long Videos with Instructed Learnable Memory [8.002949551539297]
Vision-Language Models (VLMs) are crucial for applications requiring integrated understanding of textual and visual information. We introduce ReWind, a novel memory-based VLM designed for efficient long video understanding while preserving temporal fidelity. We empirically demonstrate ReWind's superior performance on visual question answering (VQA) and temporal grounding tasks, surpassing previous methods on long video benchmarks.
arXiv Detail & Related papers (2024-11-23T13:23:22Z)
- SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation [153.46240555355408]
SlowFast-VGen is a novel dual-speed learning system for action-driven long video generation.
Our approach incorporates a conditional video diffusion model for the slow learning of world dynamics.
We propose a slow-fast learning loop algorithm that seamlessly integrates the inner fast learning loop into the outer slow learning loop.
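The dual-speed design can be pictured with a skeletal, assumption-heavy sketch: an inner loop quickly fits a small per-episode module, while the outer loop slowly updates the base model. The interfaces `loss_fn` and `make_fast_module`, along with the learning rates and step counts, are invented for this sketch and are not the paper's implementation.

```python
# Skeletal slow-fast training loop under the assumptions stated above.
import torch

def slow_fast_train(base_model, make_fast_module, loss_fn, episodes,
                    slow_lr=1e-4, fast_lr=1e-2, fast_steps=5):
    slow_opt = torch.optim.AdamW(base_model.parameters(), lr=slow_lr)
    for episode in episodes:
        # Fast inner loop: adapt a fresh lightweight module to this episode.
        fast = make_fast_module()
        fast_opt = torch.optim.SGD(fast.parameters(), lr=fast_lr)
        for _ in range(fast_steps):
            fast_opt.zero_grad()
            loss_fn(base_model, fast, episode).backward()
            fast_opt.step()
        # Slow outer loop: one base-model update using the adapted module.
        slow_opt.zero_grad()
        loss_fn(base_model, fast, episode).backward()
        slow_opt.step()
```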
arXiv Detail & Related papers (2024-10-30T17:55:52Z)
- LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding [65.46303012350207]
LongVU is an adaptive compression mechanism that reduces the number of video tokens while preserving visual details of long videos.
We leverage DINOv2 features to remove redundant frames that exhibit high similarity.
We perform spatial token reduction across frames based on their temporal dependencies.
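As a rough illustration of the frame-removal step, the sketch below greedily drops frames whose pooled DINOv2 features are nearly identical to the last kept frame; the cosine-similarity threshold and the greedy scan are assumptions of this sketch rather than LongVU's exact procedure.

```python
# Hedged sketch: prune frames whose pooled features closely match the
# previously kept frame. The 0.95 threshold is an illustrative choice.
import torch
import torch.nn.functional as F

def prune_redundant_frames(frame_feats: torch.Tensor,
                           threshold: float = 0.95) -> list[int]:
    """frame_feats: (num_frames, dim) pooled per-frame features.
    Returns indices of frames to keep."""
    keep = [0]
    for i in range(1, frame_feats.shape[0]):
        sim = F.cosine_similarity(frame_feats[i], frame_feats[keep[-1]], dim=0)
        if sim.item() < threshold:  # keep only sufficiently novel frames
            keep.append(i)
    return keep
```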
arXiv Detail & Related papers (2024-10-22T21:21:37Z)
- Enhancing Long Video Understanding via Hierarchical Event-Based Memory [9.800516656566774]
We propose a Hierarchical Event-based Memory-enhanced LLM (HEM-LLM) for better understanding of long videos.
Firstly, we design a novel adaptive sequence segmentation scheme to partition long videos into multiple events.
Secondly, while modeling the current event, we compress and inject information from the previous event to enhance long-term inter-event dependencies in videos.
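As a loose sketch of the second step, the snippet below compresses the previous event into a few mean-pooled memory tokens and prepends them to the current event's tokens; the pooling scheme and token budget are illustrative assumptions (the paper's compression is presumably learned).

```python
# Loose sketch of "compress previous event, inject into current event".
# Mean-pooled groups and simple prepending are assumptions of this sketch.
import torch

def inject_prev_event(prev_event: torch.Tensor, cur_event: torch.Tensor,
                      num_mem_tokens: int = 8) -> torch.Tensor:
    """prev_event: (t_prev, dim), cur_event: (t_cur, dim)."""
    num_mem_tokens = min(num_mem_tokens, prev_event.shape[0])
    groups = prev_event.tensor_split(num_mem_tokens, dim=0)
    mem = torch.stack([g.mean(dim=0) for g in groups])  # (num_mem_tokens, dim)
    # Prepend the compressed memory so attention over cur_event can use it.
    return torch.cat([mem, cur_event], dim=0)
```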
arXiv Detail & Related papers (2024-09-10T07:53:10Z)
- Streaming Long Video Understanding with Large Language Models [83.11094441893435]
VideoStreaming is an advanced vision-language large model (VLLM) for video understanding.
It capably understands arbitrary-length video with a constant number of video streaming tokens that are encoded and adaptively selected as the stream progresses.
Our model achieves superior performance and higher efficiency on long video benchmarks.
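A toy version of the constant-budget token memory implied above: each incoming clip's tokens are appended to a running memory, which is pruned back to a fixed size by relevance to the current query. The dot-product scoring and the `budget` value are assumptions of this sketch, not the paper's mechanism.

```python
# Toy constant-size streaming memory: append new clip tokens, then keep
# only the `budget` tokens most relevant to the query.
import torch

def update_memory(memory: torch.Tensor, clip_tokens: torch.Tensor,
                  query: torch.Tensor, budget: int = 256) -> torch.Tensor:
    """memory: (m, dim), clip_tokens: (t, dim), query: (dim,)."""
    pool = torch.cat([memory, clip_tokens], dim=0)
    if pool.shape[0] <= budget:
        return pool
    scores = pool @ query                              # relevance per token
    keep = scores.topk(budget).indices.sort().values   # preserve order
    return pool[keep]
```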
arXiv Detail & Related papers (2024-05-25T02:22:09Z)
- Generating Long Videos of Dynamic Scenes [66.56925105992472]
We present a video generation model that reproduces object motion, changes in camera viewpoint, and new content that arises over time.
A common failure case is for content to never change due to over-reliance on inductive biases to provide temporal consistency.
arXiv Detail & Related papers (2022-06-07T16:29:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences arising from its use.