Towards Redundancy Reduction in Diffusion Models for Efficient Video Super-Resolution
- URL: http://arxiv.org/abs/2509.23980v1
- Date: Sun, 28 Sep 2025 17:08:51 GMT
- Title: Towards Redundancy Reduction in Diffusion Models for Efficient Video Super-Resolution
- Authors: Jinpei Guo, Yifei Ji, Zheng Chen, Yufei Wang, Sizhuo Ma, Yong Guo, Yulun Zhang, Jian Wang,
- Abstract summary: Directly adapting generative diffusion models to video super-resolution (VSR) can result in redundancy. OASIS is an efficient $\textbf{o}$ne-step diffusion model with $\textbf{a}$ttention $\textbf{s}$pecialization for real-world v$\textbf{i}$deo $\textbf{s}$uper-resolution. OASIS achieves state-of-the-art performance on both synthetic and real-world datasets.
- Score: 41.19210731686364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have recently shown promising results for video super-resolution (VSR). However, directly adapting generative diffusion models to VSR can result in redundancy, since low-quality videos already preserve substantial content information. Such redundancy leads to increased computational overhead and learning burden, as the model performs superfluous operations and must learn to filter out irrelevant information. To address this problem, we propose OASIS, an efficient $\textbf{o}$ne-step diffusion model with $\textbf{a}$ttention $\textbf{s}$pecialization for real-world v$\textbf{i}$deo $\textbf{s}$uper-resolution. OASIS incorporates an attention specialization routing that assigns attention heads to different patterns according to their intrinsic behaviors. This routing mitigates redundancy while effectively preserving pretrained knowledge, allowing diffusion models to better adapt to VSR and achieve stronger performance. Moreover, we propose a simple yet effective progressive training strategy, which starts with temporally consistent degradations and then shifts to inconsistent settings. This strategy facilitates learning under complex degradations. Extensive experiments demonstrate that OASIS achieves state-of-the-art performance on both synthetic and real-world datasets. OASIS also provides superior inference speed, offering a $\textbf{6.2\times}$ speedup over one-step diffusion baselines such as SeedVR2. The code will be available at \href{https://github.com/jp-guo/OASIS}{https://github.com/jp-guo/OASIS}.
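The abstract's central mechanism is a routing step that assigns attention heads to different patterns according to their intrinsic behaviors. As a hedged illustration only, the sketch below routes heads by how much of their attention mass falls near the diagonal; the band-mass criterion, the threshold, and the pattern names (`local`/`global`) are invented for this example and are not the paper's actual routing rule.

```python
import numpy as np

def route_heads(attn, threshold=0.5):
    """Assign each attention head to a pattern from its behavior.

    attn: array of shape (heads, queries, keys) holding softmax maps.
    A head whose mass concentrates near the diagonal is tagged 'local';
    otherwise it is tagged 'global'. Purely illustrative.
    """
    h, q, k = attn.shape
    idx = np.abs(np.arange(q)[:, None] - np.arange(k)[None, :])
    band = idx <= 1                                   # near-diagonal band
    local_mass = (attn * band).sum(axis=(1, 2)) / q   # per-head fraction of mass
    return ["local" if m >= threshold else "global" for m in local_mass]
```

Once heads are labeled, a specialized attention variant could be applied per group; the summary above suggests this specialization is what lets the pretrained model shed redundant computation without forgetting its prior.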
Related papers
- Improved Adversarial Diffusion Compression for Real-World Video Super-Resolution [36.32266529540775]
One-step networks like SeedVR2, DOVE, and DLoRAL are heavy, with billions of parameters and multi-second latency.
Recent adversarial diffusion compression (ADC) offers a promising path by pruning and distilling these models into a compact AdcSR network.
We propose an improved ADC method for Real-VSR that balances spatial details and temporal consistency.
arXiv Detail & Related papers (2026-02-28T04:30:54Z) - Light Forcing: Accelerating Autoregressive Video Diffusion via Sparse Attention [28.598033369607723]
Light Forcing is the first sparse attention solution tailored for AR video generation models.
It incorporates a Chunk-Aware Growth mechanism to quantitatively estimate the contribution of each chunk.
We also introduce a Sparse Attention mechanism to capture informative historical and local context in a coarse-to-fine manner.
arXiv Detail & Related papers (2026-02-04T17:41:53Z) - Diffusion Beats Autoregressive in Data-Constrained Settings [50.56893491038853]
Autoregressive (AR) models have long dominated the landscape of large language models, driving progress across a wide range of tasks.
Recently, diffusion-based language models have emerged as a promising alternative, though their advantages over AR models remain underexplored.
We systematically study masked diffusion models in data-constrained settings where training involves repeated passes over limited data.
Our results suggest that when data, not compute, is the bottleneck, diffusion models offer a compelling alternative to the standard AR paradigm.
arXiv Detail & Related papers (2025-07-21T17:59:57Z) - LiftVSR: Lifting Image Diffusion to Video Super-Resolution via Hybrid Temporal Modeling with Only 4$\times$RTX 4090s [16.456543112614586]
Diffusion models have advanced video super-resolution (VSR) by enhancing perceptual quality.
We propose LiftVSR, an efficient VSR framework that leverages the image-wise diffusion prior from PixArt-$\alpha$, achieving state-of-the-art results.
Experiments on several typical VSR benchmarks demonstrate that LiftVSR achieves impressive performance with significantly lower computational costs.
arXiv Detail & Related papers (2025-06-10T07:49:33Z) - DOVE: Efficient One-Step Diffusion Model for Real-World Video Super-Resolution [43.83739935393097]
We propose DOVE, an efficient one-step diffusion model for real-world video super-resolution.
DOVE is obtained by fine-tuning a pretrained video diffusion model (*i.e.*, CogVideoX).
Experiments show that DOVE exhibits comparable or superior performance to multi-step diffusion-based VSR methods.
arXiv Detail & Related papers (2025-05-22T05:16:45Z) - Rethinking Video Tokenization: A Conditioned Diffusion-based Approach [58.164354605550194]
The new tokenizer, the Conditioned Diffusion-based Tokenizer (CDT), replaces the GAN-based decoder with a conditional diffusion model.
It is trained from scratch using only a basic MSE diffusion loss for reconstruction, along with a KL term and an LPIPS perceptual loss.
Even a scaled-down version of CDT ($3\times$ inference speedup) still performs comparably with top baselines.
arXiv Detail & Related papers (2025-03-05T17:59:19Z) - Adversarial Diffusion Compression for Real-World Image Super-Resolution [16.496532580598007]
Real-world image super-resolution aims to reconstruct high-resolution images from degraded low-resolution inputs.
One-step diffusion networks like OSEDiff and S3Diff alleviate this issue but still incur high computational costs.
This paper proposes a novel Real-ISR method, AdcSR, by distilling the one-step diffusion network OSEDiff into a streamlined diffusion-GAN model.
arXiv Detail & Related papers (2024-11-20T15:13:36Z) - Hierarchical Patch Diffusion Models for High-Resolution Video Generation [50.42746357450949]
We develop deep context fusion, which propagates context information from low-scale to high-scale patches in a hierarchical manner.
We also propose adaptive computation, which allocates more network capacity and computation towards coarse image details.
The resulting model sets a new state-of-the-art FVD score of 66.32 and Inception Score of 87.68 in class-conditional video generation.
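The deep context fusion summarized above propagates context from low-scale to high-scale patches. A minimal sketch under assumed shapes: nearest-neighbor-upsample the low-scale feature map and concatenate it channel-wise with the high-resolution patch. The 2x scale factor and the channel-concat fusion are assumptions for illustration, not the paper's exact operator.

```python
import numpy as np

def deep_context_fusion(patch, low_scale_ctx):
    """Fuse low-scale context into a high-scale patch (illustrative).

    patch:         (C, H, W) high-resolution patch features.
    low_scale_ctx: (C, H//2, W//2) features from the coarser stage.
    Returns a (2C, H, W) tensor: the patch concatenated with the
    nearest-neighbor-upsampled context.
    """
    up = low_scale_ctx.repeat(2, axis=-2).repeat(2, axis=-1)  # 2x NN upsampling
    return np.concatenate([patch, up], axis=0)                # channel concat
```

A learned projection would normally follow the concatenation; here the point is only the hierarchical flow of information from coarse to fine patches.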
arXiv Detail & Related papers (2024-06-12T01:12:53Z) - DeepCache: Accelerating Diffusion Models for Free [65.02607075556742]
DeepCache is a training-free paradigm that accelerates diffusion models from the perspective of model architecture.
DeepCache capitalizes on the inherent temporal redundancy observed in the sequential denoising steps of diffusion models.
Under the same throughput, DeepCache effectively achieves comparable or even marginally improved results with DDIM or PLMS.
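DeepCache's key idea, per the summary above, is exploiting temporal redundancy across sequential denoising steps. Below is a toy sketch of that caching pattern: an expensive "deep" computation is refreshed only every few steps while a cheap "shallow" update runs every step. The function split and interval are illustrative stand-ins, not DeepCache's actual U-Net partition.

```python
import numpy as np

def denoise_with_cache(x, steps, deep_fn, shallow_fn, cache_interval=3):
    """Run a denoising loop that caches the expensive deep features.

    deep_fn is called only every `cache_interval` steps; in between,
    its last output is reused (the temporal-redundancy assumption).
    Returns the final state and the number of deep_fn calls.
    """
    deep_feat = None
    deep_calls = 0
    for t in range(steps):
        if t % cache_interval == 0:       # refresh the cached deep features
            deep_feat = deep_fn(x)
            deep_calls += 1
        x = shallow_fn(x, deep_feat)      # cheap per-step update reuses the cache
    return x, deep_calls
```

Because consecutive denoising steps produce highly similar high-level features, the reuse costs little quality while cutting most of the per-step compute, which is consistent with the throughput comparison quoted above.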
arXiv Detail & Related papers (2023-12-01T17:01:06Z) - Investigating Tradeoffs in Real-World Video Super-Resolution [90.81396836308085]
Real-world video super-resolution (VSR) models are often trained with diverse degradations to improve generalizability, but this introduces tradeoffs.
To alleviate the first tradeoff, we propose a degradation scheme that reduces up to 40% of training time without sacrificing performance.
To facilitate fair comparisons, we propose the new VideoLQ dataset, which contains a large variety of real-world low-quality video sequences.
arXiv Detail & Related papers (2021-11-24T18:58:21Z) - VA-RED$^2$: Video Adaptive Redundancy Reduction [64.75692128294175]
We present a redundancy reduction framework, VA-RED$^2$, which is input-dependent.
We learn the adaptive policy jointly with the network weights in a differentiable way with a shared-weight mechanism.
Our framework achieves a 20%-40% reduction in computation (FLOPs) when compared to state-of-the-art methods.
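Input-dependent redundancy reduction of the kind summarized above can be caricatured with a tiny temporal-skip policy: frames nearly identical to their predecessor reuse cached features instead of being recomputed. The mean-difference criterion and threshold below are invented for illustration; VA-RED$^2$ actually learns its policy jointly with the network weights.

```python
import numpy as np

def compute_mask(frames, motion_threshold=0.1):
    """Decide per frame whether to recompute features (illustrative).

    frames: list of arrays. A frame whose mean absolute difference
    from the previous frame falls below the threshold reuses the
    cached features (False); otherwise it is recomputed (True).
    """
    mask = [True]                         # first frame is always computed
    for prev, cur in zip(frames[:-1], frames[1:]):
        changed = np.abs(cur - prev).mean() > motion_threshold
        mask.append(bool(changed))
    return mask
```

The FLOPs saving then scales with the fraction of frames (or, in the actual method, channels and features) the policy marks as skippable for a given input.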
arXiv Detail & Related papers (2021-02-15T22:57:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.