Relationship Quantification of Image Degradations
- URL: http://arxiv.org/abs/2212.04148v3
- Date: Sat, 5 Aug 2023 13:43:18 GMT
- Title: Relationship Quantification of Image Degradations
- Authors: Wenxin Wang, Boyun Li, Yuanbiao Gou, Peng Hu, Wangmeng Zuo and Xi Peng
- Abstract summary: The Degradation Relationship Index (DRI) is defined as the mean drop-rate difference in validation loss between two models trained on the anchor degradation alone and on its mixture with an auxiliary degradation.
A positive DRI predicts a performance improvement when the specific degradation is used as an auxiliary to train models.
We propose a simple but effective method to estimate whether a given degradation combination can improve performance on the anchor degradation.
- Score: 72.98190570967937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study two challenging but less-touched problems in image
restoration, namely, i) how to quantify the relationship between image
degradations and ii) how to improve the performance of a specific restoration
task using the quantified relationship. To tackle the first challenge, we
propose a Degradation Relationship Index (DRI), which is defined as the mean
drop rate difference in the validation loss between two models which are
respectively trained using the anchor degradation and the mixture of the anchor
and the auxiliary degradations. Through quantifying the degradation
relationship using DRI, we reveal that i) a positive DRI always predicts
performance improvement by using the specific degradation as an auxiliary to
train models; ii) the degradation proportion is crucial to the image
restoration performance. In other words, the restoration performance is
improved only if the anchor and the auxiliary degradations are mixed with an
appropriate proportion. Based on the observations, we further propose a simple
but effective method (dubbed DPD) to estimate whether the given degradation
combinations could improve the performance on the anchor degradation with the
assistance of the auxiliary degradation. Extensive experimental results verify
the effectiveness of our method in dehazing, denoising, deraining, and
desnowing. The code will be released after acceptance.
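The DRI described above can be sketched in a few lines. This is a hedged illustration based solely on the abstract's wording (the mean drop-rate difference in validation loss between the anchor-only model and the anchor-plus-auxiliary model); the paper's exact formulation may differ, and the function names and loss values here are hypothetical.

```python
def drop_rates(losses):
    """Relative per-epoch drop rates of a validation-loss curve."""
    return [(prev - cur) / prev for prev, cur in zip(losses, losses[1:])]

def degradation_relationship_index(anchor_losses, mixed_losses):
    """Mean difference in validation-loss drop rates between a model
    trained on the anchor + auxiliary mixture and one trained on the
    anchor degradation alone. A positive value suggests the auxiliary
    degradation helps (per the abstract's claim)."""
    r_anchor = drop_rates(anchor_losses)
    r_mixed = drop_rates(mixed_losses)
    n = min(len(r_anchor), len(r_mixed))
    return sum(rm - ra for rm, ra in zip(r_mixed[:n], r_anchor[:n])) / n

# Hypothetical validation-loss curves: the mixed model converges faster,
# so its DRI relative to the anchor-only model is positive.
anchor = [1.00, 0.90, 0.85]
mixed = [1.00, 0.80, 0.70]
dri = degradation_relationship_index(anchor, mixed)
```

Under the paper's observation, a positive `dri` would indicate that mixing in the auxiliary degradation (at an appropriate proportion) is expected to improve restoration on the anchor degradation.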
Related papers
- Dual-Representation Interaction Driven Image Quality Assessment with Restoration Assistance [11.983231834400698]
No-Reference Image Quality Assessment for distorted images has always been a challenging problem due to image content variance and distortion diversity.
Previous IQA models mostly encode explicit single-quality features of synthetic images to obtain quality-aware representations for quality score prediction.
We introduce the DRI method to obtain degradation vectors and quality vectors of images, which separately model the degradation and quality information of low-quality images.
arXiv Detail & Related papers (2024-11-26T12:48:47Z) - Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE)
arXiv Detail & Related papers (2024-10-16T08:07:18Z) - Efficient Degradation-aware Any Image Restoration [83.92870105933679]
We propose DaAIR, an efficient All-in-One image restorer employing a Degradation-aware Learner (DaLe) in the low-rank regime.
By dynamically allocating model capacity to input degradations, we realize an efficient restorer integrating holistic and specific learning.
arXiv Detail & Related papers (2024-05-24T11:53:27Z) - Analysis of Deep Image Prior and Exploiting Self-Guidance for Image
Reconstruction [13.277067849874756]
We study how DIP recovers information from undersampled imaging measurements.
We introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input.
Our method incorporates a novel denoiser regularization term which enables robust and stable joint estimation of both the network input and reconstructed image.
arXiv Detail & Related papers (2024-02-06T15:52:23Z) - Neural Degradation Representation Learning for All-In-One Image
Restoration [47.44349756954423]
We propose an all-in-one image restoration network that tackles multiple degradations.
We learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations.
We develop a degradation query module and a degradation injection module to effectively recognize and utilize the specific degradation based on NDR.
arXiv Detail & Related papers (2023-10-19T15:59:24Z) - DiracDiffusion: Denoising and Incremental Reconstruction with Assured Data-Consistency [24.5360032541275]
Diffusion models have established a new state of the art in a multitude of computer vision tasks, including image restoration.
We propose a novel framework for inverse problem solving, namely we assume that the observation comes from a degradation process that gradually degrades and noises the original clean image.
Our technique maintains consistency with the original measurement throughout the reverse process, and allows for great flexibility in trading off perceptual quality for improved distortion metrics and sampling speedup via early-stopping.
arXiv Detail & Related papers (2023-03-25T04:37:20Z) - DR2: Diffusion-based Robust Degradation Remover for Blind Face
Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z) - Hierarchical Similarity Learning for Aliasing Suppression Image
Super-Resolution [64.15915577164894]
A hierarchical image super-resolution network (HSRNet) is proposed to suppress the influence of aliasing.
HSRNet achieves better quantitative and visual performance than other works, and remits the aliasing more effectively.
arXiv Detail & Related papers (2022-06-07T14:55:32Z) - Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN [13.335546116599494]
This paper proposes a method called Dual Perceptual Loss (DP Loss) to replace the original perceptual loss to solve the problem of single image super-resolution reconstruction.
Due to the complementary property between the VGG features and the ResNet features, the proposed DP Loss considers the advantages of learning two features simultaneously.
The qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of our proposed method over state-of-the-art super-resolution methods.
arXiv Detail & Related papers (2022-01-17T12:42:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.