Uniform Discrete Diffusion with Metric Path for Video Generation
- URL: http://arxiv.org/abs/2510.24717v1
- Date: Tue, 28 Oct 2025 17:59:57 GMT
- Title: Uniform Discrete Diffusion with Metric Path for Video Generation
- Authors: Haoge Deng, Ting Pan, Fan Zhang, Yang Liu, Zhuoyan Luo, Yufeng Cui, Wenxuan Wang, Chunhua Shen, Shiguang Shan, Zhaoxiang Zhang, Xinlong Wang
- Abstract summary: Continuous-space video generation has advanced rapidly, while discrete approaches lag behind due to error accumulation and long-context inconsistency. We revisit discrete generative modeling and present Uniform discRete diffuSion with metric pAth (URSA), a simple yet powerful framework that bridges the gap with continuous approaches for scalable video generation. URSA consistently outperforms existing discrete methods and achieves performance comparable to state-of-the-art continuous diffusion methods.
- Score: 103.86033350602908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous-space video generation has advanced rapidly, while discrete approaches lag behind due to error accumulation and long-context inconsistency. In this work, we revisit discrete generative modeling and present Uniform discRete diffuSion with metric pAth (URSA), a simple yet powerful framework that bridges the gap with continuous approaches for scalable video generation. At its core, URSA formulates the video generation task as an iterative global refinement of discrete spatiotemporal tokens. It integrates two key designs: a Linearized Metric Path and a Resolution-dependent Timestep Shifting mechanism. These designs enable URSA to scale efficiently to high-resolution image synthesis and long-duration video generation, while requiring significantly fewer inference steps. Additionally, we introduce an asynchronous temporal fine-tuning strategy that unifies versatile tasks within a single model, including interpolation and image-to-video generation. Extensive experiments on challenging video and image generation benchmarks demonstrate that URSA consistently outperforms existing discrete methods and achieves performance comparable to state-of-the-art continuous diffusion methods. Code and models are available at https://github.com/baaivision/URSA
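The abstract's Resolution-dependent Timestep Shifting can be illustrated with a minimal sketch. The square-root token scaling and the shift map t' = s·t / (1 + (s−1)·t) below are assumptions borrowed from common flow-matching practice, not URSA's published schedule:

```python
import math

def shifted_timesteps(num_steps: int, num_tokens: int, base_tokens: int = 256) -> list[float]:
    """Hypothetical resolution-dependent timestep schedule.

    Inputs with more tokens (higher resolution) get a larger shift,
    which bends a uniform schedule toward higher noise levels. Both
    the sqrt scaling and the shift map are illustrative assumptions.
    """
    shift = math.sqrt(num_tokens / base_tokens)
    ts = [i / (num_steps - 1) for i in range(num_steps)]  # uniform in [0, 1]
    return [shift * t / (1 + (shift - 1) * t) for t in ts]

# More tokens -> schedule spends more of its steps at high noise.
low_res = shifted_timesteps(5, 256)    # shift = 1.0, stays uniform
high_res = shifted_timesteps(5, 4096)  # shift = 4.0, bent upward
```

Under this assumed form, the endpoints 0 and 1 are preserved for any shift, so only the interior spacing of the schedule changes with resolution.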
Related papers
- Inference-based GAN Video Generation [47.53991869205973]
We propose a new type of video generator that equips adversarial unconditional video generators with a variational encoder. Existing models struggle with the temporal scaling of the generated videos. We employ a novel, memory-efficient approach to generate long videos composed of hundreds or thousands of frames.
arXiv Detail & Related papers (2025-12-25T20:14:38Z) - SemanticGen: Video Generation in Semantic Space [60.49729308406981]
State-of-the-art video generative models typically learn the distribution of video latents in the VAE space and map them to pixels using a VAE decoder. We introduce SemanticGen, a novel solution to generate videos in the semantic space. Our method is also effective and computationally efficient when extended to long video generation.
arXiv Detail & Related papers (2025-12-23T18:59:56Z) - Less Is More: Generating Time Series with LLaMA-Style Autoregression in Simple Factorized Latent Spaces [29.349850914912846]
We propose FAR-TS, a simple yet effective framework that combines disentangled factorization with an autoregressive Transformer over a discrete, quantized latent space to generate time series. A LLaMA-style autoregressive Transformer then models these token sequences, enabling fast and controllable generation of sequences with arbitrary length.
arXiv Detail & Related papers (2025-11-07T04:15:38Z) - LongScape: Advancing Long-Horizon Embodied World Models with Context-Aware MoE [16.561410415129778]
LongScape is a hybrid framework that combines intra-chunk diffusion denoising with inter-chunk autoregressive causal generation. Our core innovation is an action-guided, variable-length chunking mechanism that partitions video based on the semantic context of robotic actions.
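The action-guided, variable-length chunking described above can be sketched as partitioning frames at action boundaries. This toy version splits on discrete action labels; LongScape's actual mechanism operates on the semantic context of robotic actions, so the label-based split is an assumption for illustration:

```python
def action_chunks(actions: list[str]) -> list[tuple[int, int]]:
    """Partition frame indices into variable-length [start, end) chunks,
    starting a new chunk wherever the action label changes.
    Hypothetical illustration of action-guided chunking.
    """
    chunks: list[tuple[int, int]] = []
    start = 0
    for i in range(1, len(actions)):
        if actions[i] != actions[i - 1]:
            chunks.append((start, i))
            start = i
    chunks.append((start, len(actions)))
    return chunks

# Three actions of different durations -> three variable-length chunks.
spans = action_chunks(["reach", "reach", "grasp", "lift", "lift", "lift"])
```

Each chunk could then be denoised jointly by the intra-chunk diffusion model, with chunks generated autoregressively in order.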
arXiv Detail & Related papers (2025-09-26T02:47:05Z) - SwiftVideo: A Unified Framework for Few-Step Video Generation through Trajectory-Distribution Alignment [76.60024640625478]
Diffusion-based or flow-based models have achieved significant progress in video synthesis but require multiple iterative sampling steps. We propose a unified and stable distillation framework that combines the advantages of trajectory-preserving and distribution-matching strategies. Our method maintains high-quality video generation while substantially reducing the number of inference steps.
arXiv Detail & Related papers (2025-08-08T07:26:34Z) - LoViC: Efficient Long Video Generation with Context Compression [68.22069741704158]
We introduce LoViC, a DiT-based framework trained on million-scale open-domain videos. At the core of our approach is FlexFormer, an expressive autoencoder that jointly compresses video and text into unified latent representations.
arXiv Detail & Related papers (2025-07-17T09:46:43Z) - Fast Autoregressive Video Generation with Diagonal Decoding [34.90521536645348]
Diagonal Decoding (DiagD) is a training-free inference acceleration algorithm for autoregressively pre-trained models. Our method generates tokens along diagonal paths in the spatial-temporal token grid, enabling parallel decoding within each frame. DiagD achieves up to 10× speedup compared to naive sequential decoding, while maintaining comparable visual fidelity.
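The diagonal traversal behind DiagD can be sketched for a single 2-D token grid: positions on the same anti-diagonal share no left/top dependency, so they can be decoded in parallel once earlier diagonals are done. This enumeration is an illustrative assumption, not DiagD's actual implementation:

```python
def diagonal_order(height: int, width: int) -> list[list[tuple[int, int]]]:
    """Group the positions of an H x W token grid by anti-diagonal
    (constant i + j). Each group is one parallel decoding wave."""
    groups: list[list[tuple[int, int]]] = []
    for d in range(height + width - 1):
        groups.append([(i, d - i) for i in range(height) if 0 <= d - i < width])
    return groups

# A 2 x 3 grid: 6 tokens decoded in 4 waves instead of 6 sequential steps.
waves = diagonal_order(2, 3)
```

For an H x W grid this yields H + W − 1 waves, which is where the parallel speedup over H × W sequential steps comes from.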
arXiv Detail & Related papers (2025-03-18T09:42:55Z) - Rethinking Video Tokenization: A Conditioned Diffusion-based Approach [58.164354605550194]
A new tokenizer, the Conditioned Diffusion-based Tokenizer (CDT), replaces the GAN-based decoder with a conditional diffusion model. It is trained from scratch using only a basic MSE diffusion loss for reconstruction, along with a KL term and an LPIPS perceptual loss. Even a scaled-down version of CDT (3× inference speedup) still performs comparably with top baselines.
arXiv Detail & Related papers (2025-03-05T17:59:19Z) - Discrete Diffusion Language Model for Efficient Text Summarization [19.267738861590487]
We introduce a novel semantic-aware noising process that enables Transformer backbones to handle long sequences effectively. Our approaches achieve state-of-the-art performance on three benchmark summarization datasets: Gigaword, CNN/DailyMail, and arXiv.
arXiv Detail & Related papers (2024-06-25T09:55:22Z) - MAVIN: Multi-Action Video Generation with Diffusion Models via Transition Video Infilling [19.004339956475498]
MAVIN is designed to generate transition videos that seamlessly connect two given videos, forming a cohesive integrated sequence.
We introduce a new metric, CLIP-RS (CLIP Relative Smoothness), to evaluate temporal coherence and smoothness, complementing traditional quality-based metrics.
Experimental results on horse and tiger scenarios demonstrate MAVIN's superior performance in generating smooth and coherent video transitions.
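Since the exact CLIP-RS formula is defined in the MAVIN paper, the sketch below only illustrates the general idea of a smoothness score over frame embeddings: average the cosine similarity of consecutive CLIP image embeddings. The averaged-cosine form is an assumption for illustration:

```python
import math

def relative_smoothness(frame_embeddings: list[list[float]]) -> float:
    """Hypothetical temporal-smoothness score in the spirit of CLIP-RS:
    the mean cosine similarity between consecutive frame embeddings
    (e.g. from a CLIP image encoder). Illustrative only."""
    def cos(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    sims = [cos(a, b) for a, b in zip(frame_embeddings, frame_embeddings[1:])]
    return sum(sims) / len(sims)

# Two identical frames then an abrupt change: one perfectly smooth
# transition (similarity 1.0) and one maximally abrupt one (0.0).
score = relative_smoothness([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

A score near 1.0 would indicate smooth transitions; abrupt content changes between frames pull it down.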
arXiv Detail & Related papers (2024-05-28T09:46:09Z) - Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution [65.91317390645163]
Upscale-A-Video is a text-guided latent diffusion framework for video upscaling.
It ensures temporal coherence locally by integrating temporal layers into the U-Net and the VAE decoder, maintaining consistency within short sequences.
It also offers greater flexibility by allowing text prompts to guide texture creation and adjustable noise levels to balance restoration and generation.
arXiv Detail & Related papers (2023-12-11T18:54:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.