Neural Degradation Representation Learning for All-In-One Image Restoration
- URL: http://arxiv.org/abs/2310.12848v2
- Date: Tue, 17 Dec 2024 10:44:49 GMT
- Title: Neural Degradation Representation Learning for All-In-One Image Restoration
- Authors: Mingde Yao, Ruikang Xu, Yuanshen Guan, Jie Huang, Zhiwei Xiong
- Abstract summary: We propose an all-in-one image restoration network that tackles multiple degradations. We learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations. We develop a degradation query module and a degradation injection module to effectively recognize and utilize the specific degradation based on NDR.
- Score: 44.222096739644655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods have demonstrated effective performance on a single degradation type. In practical applications, however, the degradation is often unknown, and the mismatch between the model and the degradation will result in a severe performance drop. In this paper, we propose an all-in-one image restoration network that tackles multiple degradations. Due to the heterogeneous nature of different types of degradations, it is difficult to process multiple degradations in a single network. To this end, we propose to learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations. The learned NDR decomposes different types of degradations adaptively, similar to a neural dictionary that represents basic degradation components. Subsequently, we develop a degradation query module and a degradation injection module to effectively recognize and utilize the specific degradation based on NDR, enabling the all-in-one restoration ability for multiple degradations. Moreover, we propose a bidirectional optimization strategy to effectively drive NDR to learn the degradation representation by optimizing the degradation and restoration processes alternately. Comprehensive experiments on representative types of degradations (including noise, haze, rain, and downsampling) demonstrate the effectiveness and generalization capability of our method.
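To make the query/injection mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch. The module names, tensor shapes, dictionary size, and the attention/modulation details are assumptions for illustration only, not the authors' released implementation; the NDR is modeled here as a small bank of learnable components that is queried by a global descriptor of the degraded features and then injected back via feature modulation.

```python
# Hypothetical sketch of an NDR-style query/injection pair (not the paper's code).
import torch
import torch.nn as nn


class DegradationQuery(nn.Module):
    """Query a learnable degradation dictionary (NDR) with features of the degraded image."""

    def __init__(self, channels: int = 64, num_components: int = 8):
        super().__init__()
        # NDR: a bank of learnable degradation components (neural dictionary).
        self.ndr = nn.Parameter(torch.randn(num_components, channels))
        self.to_query = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        # Global descriptor of the input degradation.
        q = self.to_query(feat).mean(dim=(2, 3))                    # (B, C)
        # Attention over dictionary components.
        attn = torch.softmax(q @ self.ndr.t() / c ** 0.5, dim=-1)   # (B, K)
        # Image-specific degradation representation.
        return attn @ self.ndr                                      # (B, C)


class DegradationInjection(nn.Module):
    """Modulate restoration features with the queried degradation representation."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.to_scale = nn.Linear(channels, channels)
        self.to_shift = nn.Linear(channels, channels)

    def forward(self, feat: torch.Tensor, deg_repr: torch.Tensor) -> torch.Tensor:
        scale = self.to_scale(deg_repr).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        shift = self.to_shift(deg_repr).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return feat * (1 + scale) + shift


# Shape check only:
#   feat = torch.randn(2, 64, 32, 32)
#   deg = DegradationQuery()(feat)
#   out = DegradationInjection()(feat, deg)
```

The paper's bidirectional optimization strategy (alternately optimizing the degradation and restoration processes) would sit in the training loop around modules like these; it is omitted here.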
Related papers
- Dynamic Degradation Decomposition Network for All-in-One Image Restoration [3.856518745550605]
We introduce a dynamic degradation decomposition network for all-in-one image restoration, named D$3$Net.
D$3$Net achieves degradation-adaptive image restoration with guided prompt through cross-domain interaction and dynamic degradation decomposition.
Experiments on multiple image restoration tasks demonstrate that D$3$Net significantly outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2025-02-26T11:49:58Z)
- Mixed Degradation Image Restoration via Local Dynamic Optimization and Conditional Embedding [67.57487747508179]
Multiple-in-one image restoration (IR) has made significant progress, aiming to handle all types of single-degradation image restoration with a single model.
In this paper, we propose a novel multiple-in-one IR model that can effectively restore images with both single and mixed degradations.
arXiv Detail & Related papers (2024-11-25T09:26:34Z)
- Chain-of-Restoration: Multi-Task Image Restoration Models are Zero-Shot Step-by-Step Universal Image Restorers [53.298698981438]
We propose Universal Image Restoration (UIR), a new task setting that requires models to be trained on a set of degradation bases and then remove any degradation that these bases can potentially compose in a zero-shot manner.
Inspired by Chain-of-Thought prompting, which guides LLMs to address problems step by step, we propose the Chain-of-Restoration (CoR).
CoR instructs models to remove unknown composite degradations step by step.
arXiv Detail & Related papers (2024-10-11T10:21:42Z)
- Learning Dual Transformers for All-In-One Image Restoration from a Frequency Perspective [14.818622675158528]
This work aims to tackle the all-in-one image restoration task, which seeks to handle multiple types of degradation with a single model.
The primary challenge is to extract degradation representations from the input degraded images and use them to guide the model's adaptation to specific degradation types.
We propose a new dual-transformer approach comprising two components: a frequency-aware Degradation estimation transformer (Dformer) and a degradation-adaptive Restoration transformer (Rformer).
arXiv Detail & Related papers (2024-06-30T13:14:44Z)
- Efficient Degradation-aware Any Image Restoration [83.92870105933679]
We propose DaAIR, an efficient all-in-one image restorer employing a Degradation-aware Learner (DaLe) in the low-rank regime.
By dynamically allocating model capacity to input degradations, we realize an efficient restorer integrating holistic and specific learning.
arXiv Detail & Related papers (2024-05-24T11:53:27Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Relationship Quantification of Image Degradations [72.98190570967937]
Degradation Relationship Index (DRI) is defined as the mean drop rate difference in the validation loss between two models.
DRI always predicts performance improvement by using the specific degradation as an auxiliary to train models.
We propose a simple but effective method to estimate whether the given degradation combinations could improve the performance on the anchor degradation.
arXiv Detail & Related papers (2022-12-08T09:05:19Z)
- Learning Generalizable Latent Representations for Novel Degradations in Super Resolution [29.706191592443027]
We propose to learn a latent representation space for degradations, which can be generalized from handcrafted (base) degradations to novel degradations.
The obtained representations for a novel degradation in this latent space are then leveraged to generate degraded images consistent with the novel degradation.
We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness and advantages of our method for blind super-resolution with novel degradations.
arXiv Detail & Related papers (2022-07-25T16:22:30Z)
- Unsupervised Degradation Representation Learning for Blind Super-Resolution [27.788488575616032]
CNN-based super-resolution (SR) methods suffer a severe performance drop when the real degradation is different from their assumption.
We propose an unsupervised degradation representation learning scheme for blind SR without explicit degradation estimation.
Our network achieves state-of-the-art performance for the blind SR task.
arXiv Detail & Related papers (2021-04-01T11:57:42Z)