Cross-Consistent Deep Unfolding Network for Adaptive All-In-One Video
Restoration
- URL: http://arxiv.org/abs/2309.01627v3
- Date: Mon, 11 Dec 2023 02:30:02 GMT
- Title: Cross-Consistent Deep Unfolding Network for Adaptive All-In-One Video
Restoration
- Authors: Yuanshuo Cheng, Mingwen Shao, Yecong Wan, Yuanjian Qiao, Wangmeng Zuo,
Deyu Meng
- Abstract summary: We propose a Cross-consistent Deep Unfolding Network (CDUN) for All-In-One VR.
By orchestrating two cascading procedures, CDUN achieves adaptive processing for diverse degradations.
In addition, we introduce a window-based inter-frame fusion strategy to utilize information from more adjacent frames.
- Score: 78.14941737723501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Video Restoration (VR) methods typically require deploying
a separate model for each type of adverse weather degradation, lacking the
capability to process diverse degradations adaptively. This limitation
amplifies the complexity and deployment costs in practical applications. To
overcome this deficiency, in this paper, we propose a
Cross-consistent Deep Unfolding Network (CDUN) for All-In-One VR, which enables
the employment of a single model to remove diverse degradations for the first
time. Specifically, the proposed CDUN implements a novel iterative
optimization framework, capable of restoring frames corrupted by the
corresponding degradations according to degradation features given in advance. To empower
the framework for eliminating diverse degradations, we devise a Sequence-wise
Adaptive Degradation Estimator (SADE) to estimate degradation features for the
input corrupted video. By orchestrating these two cascading procedures, CDUN
achieves adaptive processing for diverse degradations. In addition, we introduce
a window-based inter-frame fusion strategy to utilize information from more
adjacent frames. This strategy involves the progressive stacking of temporal
windows in multiple iterations, effectively enlarging the temporal receptive
field and enabling each frame's restoration to leverage information from
distant frames. Extensive experiments demonstrate that the proposed method
achieves state-of-the-art performance in All-In-One VR.
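The two cascading procedures described in the abstract can be sketched in code. This is a minimal illustration only, not the authors' implementation: the function names (`estimate_degradation`, `unfold_step`, `restore_video`), the mean-intensity degradation feature, and the averaging-based window fusion are all assumptions standing in for SADE, the unfolding update, and the inter-frame fusion strategy.

```python
import numpy as np

def estimate_degradation(frames):
    """Stand-in for the Sequence-wise Adaptive Degradation Estimator (SADE):
    here each frame is summarized by a single mean-intensity feature."""
    return np.array([f.mean() for f in frames])

def unfold_step(frame, feature):
    """One iteration of the unfolding framework (placeholder update:
    a small correction driven by the estimated degradation feature)."""
    return frame - 0.1 * feature

def restore_video(frames, num_iters=3, window=3):
    feats = estimate_degradation(frames)       # procedure 1: degradation estimation
    frames = [f.astype(float) for f in frames]
    for _ in range(num_iters):                 # procedure 2: iterative optimization
        fused = []
        for t in range(len(frames)):
            lo = max(0, t - window // 2)
            hi = min(len(frames), t + window // 2 + 1)
            # Window-based inter-frame fusion: average a temporal window.
            # Stacking windows across iterations progressively enlarges the
            # temporal receptive field, so each frame eventually draws on
            # information from distant frames.
            fused.append(np.mean(frames[lo:hi], axis=0))
        frames = [unfold_step(f, d) for f, d in zip(fused, feats)]
    return frames

video = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
restored = restore_video(video)
print(len(restored), restored[0].shape)
```

The point of the sketch is the cascade: degradation features are estimated once per sequence, then reused to condition every unfolding iteration, which is what makes a single model applicable to diverse degradations.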
Related papers
- DiVE: Efficient Multi-View Driving Scenes Generation Based on Video Diffusion Transformer [56.98400572837792]
DiVE produces high-fidelity, temporally coherent, and cross-view consistent multi-view videos.
These innovations collectively achieve a 2.62x speedup with minimal quality degradation.
arXiv Detail & Related papers (2025-04-28T09:20:50Z)
- Beyond Degradation Redundancy: Contrastive Prompt Learning for All-in-One Image Restoration [109.38288333994407]
Contrastive Prompt Learning (CPL) is a novel framework that fundamentally enhances prompt-task alignment.
Our framework establishes new state-of-the-art performance while maintaining parameter efficiency, offering a principled solution for unified image restoration.
arXiv Detail & Related papers (2025-04-14T08:24:57Z)
- Vision-Language Gradient Descent-driven All-in-One Deep Unfolding Networks [14.180694577459425]
Vision-Language-guided Unfolding Network (VLU-Net) is a unified DUN framework for handling multiple degradation types simultaneously.
VLU-Net is the first all-in-one DUN framework and outperforms current leading one-by-one and all-in-one end-to-end methods by 3.74 dB on the SOTS dehazing dataset and 1.70 dB on the Rain100L deraining dataset.
arXiv Detail & Related papers (2025-03-21T08:02:48Z)
- Dynamic Degradation Decomposition Network for All-in-One Image Restoration [3.856518745550605]
We introduce a dynamic degradation decomposition network for all-in-one image restoration, named D$^3$Net.
D$^3$Net achieves degradation-adaptive image restoration with guided prompts through cross-domain interaction and dynamic degradation decomposition.
Experiments on multiple image restoration tasks demonstrate that D$^3$Net significantly outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2025-02-26T11:49:58Z)
- TDM: Temporally-Consistent Diffusion Model for All-in-One Real-World Video Restoration [13.49297560533422]
Our method can restore various types of video degradation with a single unified model.
Our method advances the video restoration task by providing a unified solution that enhances video quality across multiple applications.
arXiv Detail & Related papers (2025-01-04T12:15:37Z)
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding [67.57487747508179]
Multiple-in-one image restoration (IR) has made significant progress, aiming to handle all types of single degraded image restoration with a single model.
In this paper, we propose a novel multiple-in-one IR model that can effectively restore images with both single and mixed degradations.
arXiv Detail & Related papers (2024-11-25T09:26:34Z)
- AllRestorer: All-in-One Transformer for Image Restoration under Composite Degradations [52.076067325999226]
We propose a novel Transformer-based restoration framework, AllRestorer.
AllRestorer adaptively considers all image impairments, thereby avoiding errors from scene descriptor misdirection.
We show that AllRestorer achieves a 5.00 dB increase in PSNR compared to the baseline on the CDD-11 dataset.
arXiv Detail & Related papers (2024-11-16T05:30:55Z)
- All-in-one Weather-degraded Image Restoration via Adaptive Degradation-aware Self-prompting Model [23.940339806402882]
Existing approaches for all-in-one weather-degraded image restoration suffer from inefficiencies in leveraging degradation-aware priors.
We develop an adaptive degradation-aware self-prompting model (ADSM) for all-in-one weather-degraded image restoration.
arXiv Detail & Related papers (2024-11-12T00:07:16Z)
- Chain-of-Restoration: Multi-Task Image Restoration Models are Zero-Shot Step-by-Step Universal Image Restorers [53.298698981438]
We propose Universal Image Restoration (UIR), a new task setting that requires models to be trained on a set of degradation bases and then remove any degradation that these bases can potentially compose in a zero-shot manner.
Inspired by Chain-of-Thought prompting, which guides LLMs to address problems step by step, we propose Chain-of-Restoration (CoR).
CoR instructs models to remove unknown composite degradations step by step.
arXiv Detail & Related papers (2024-10-11T10:21:42Z)
- OneRestore: A Universal Restoration Framework for Composite Degradation [33.556183375565034]
In real-world scenarios, image impairments often manifest as composite degradations, presenting a complex interplay of elements such as low light, haze, rain, and snow.
Our study proposes a versatile imaging model that consolidates four physical corruption paradigms to accurately represent complex, composite degradation scenarios.
OneRestore is a novel transformer-based framework designed for adaptive, controllable scene restoration.
arXiv Detail & Related papers (2024-07-05T16:27:00Z)
- Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition [68.6707284662443]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by severe invisibility and noise.
One critical aspect is formulating a consistency constraint specifically for temporal-spatial illumination and appearance enhanced versions.
We present an innovative video Retinex-based decomposition strategy that operates without the need for explicit supervision.
arXiv Detail & Related papers (2024-05-24T15:56:40Z)
- Video Face Super-Resolution with Motion-Adaptive Feedback Cell [90.73821618795512]
Video super-resolution (VSR) methods have recently achieved remarkable success due to the development of deep convolutional neural networks (CNNs).
In this paper, we propose a Motion-Adaptive Feedback Cell (MAFC), a simple but effective block, which can efficiently capture the motion compensation and feed it back to the network in an adaptive way.
arXiv Detail & Related papers (2020-02-15T13:14:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.