Preserving Full Degradation Details for Blind Image Super-Resolution
- URL: http://arxiv.org/abs/2407.01299v2
- Date: Tue, 2 Jul 2024 08:39:21 GMT
- Title: Preserving Full Degradation Details for Blind Image Super-Resolution
- Authors: Hongda Liu, Longguang Wang, Ye Zhang, Kaiwen Xue, Shunbo Zhou, Yulan Guo
- Abstract summary: We propose an alternative approach that learns degradation representations by reproducing degraded low-resolution (LR) images.
By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations.
Experiments show that our representations can extract accurate and highly robust degradation information.
- Score: 40.152015542099704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of image super-resolution relies heavily on the accuracy of degradation information, especially under blind settings. Due to the absence of true degradation models in real-world scenarios, previous methods learn distinct representations by distinguishing different degradations in a batch. However, the most significant degradation differences may provide shortcuts for the learning of representations, such that subtle differences may be discarded. In this paper, we propose an alternative approach that learns degradation representations by reproducing degraded low-resolution (LR) images. By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations. In addition, we develop an energy distance loss to facilitate the learning of the degradation representations by introducing a bounded constraint. Experiments show that our representations can extract accurate and highly robust degradation information. Moreover, evaluations on both synthetic and real images demonstrate that our ReDSR achieves state-of-the-art performance for blind SR tasks.
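The abstract describes the scheme only at a high level. As a rough illustration, here is a minimal PyTorch sketch, under stated assumptions, of the general idea: a degradation encoder summarizes the LR input into a representation, a degrader conditioned on that representation tries to reproduce the LR input from the corresponding HR image, and a sample-based energy distance bounds the representation distribution. The module names, network shapes, standard-Gaussian reference, and loss weight are assumptions for illustration, not the authors' ReDSR implementation.
```python
# Hypothetical sketch (not the authors' code): learn a degradation
# representation by asking a conditional degrader to reproduce the LR input,
# and bound the representation distribution with a sample-based energy distance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DegradationEncoder(nn.Module):
    """Maps an LR image to a compact degradation representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, dim)

    def forward(self, lr):
        return self.fc(self.net(lr).flatten(1))

class Degrader(nn.Module):
    """Re-degrades a clean HR image, conditioned on the representation."""
    def __init__(self, dim=128, scale=4):
        super().__init__()
        self.scale = scale
        self.cond = nn.Linear(dim, 64)
        self.body = nn.Sequential(
            nn.Conv2d(3 + 64, 64, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, hr, rep):
        lr_base = F.interpolate(hr, scale_factor=1 / self.scale, mode="bicubic")
        c = self.cond(rep)[:, :, None, None].expand(-1, -1, *lr_base.shape[-2:])
        return self.body(torch.cat([lr_base, c], dim=1))

def energy_distance(x, y):
    """Within-batch estimate of 2 E||x - y|| - E||x - x'|| - E||y - y'||."""
    return 2 * torch.cdist(x, y).mean() - torch.cdist(x, x).mean() - torch.cdist(y, y).mean()

# One illustrative training step on a batch of (HR, degraded LR) pairs.
encoder, degrader = DegradationEncoder(), Degrader()
hr, lr = torch.rand(8, 3, 128, 128), torch.rand(8, 3, 32, 32)
rep = encoder(lr)                          # encode the degradation
lr_rec = degrader(hr, rep)                 # try to reproduce the LR input
loss = F.l1_loss(lr_rec, lr) \
       + 0.1 * energy_distance(rep, torch.randn_like(rep))  # bounded constraint (assumed weight/reference)
```
The reconstruction term forces the representation to carry whatever information is needed to re-degrade the HR image into the observed LR image, which is the sense in which full degradation details are preserved.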
Related papers
- Content-decoupled Contrastive Learning-based Implicit Degradation Modeling for Blind Image Super-Resolution [33.16889233975723]
Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community.
We propose a new Content-decoupled Contrastive Learning-based blind image super-resolution (CdCL) framework.
arXiv Detail & Related papers (2024-08-10T04:51:43Z)
- DaLPSR: Leverage Degradation-Aligned Language Prompt for Real-World Image Super-Resolution [19.33582308829547]
This paper proposes to leverage degradation-aligned language prompt for accurate, fine-grained, and high-fidelity image restoration.
The proposed method achieves a new state-of-the-art perceptual quality level.
arXiv Detail & Related papers (2024-06-24T09:30:36Z)
- DeeDSR: Towards Real-World Image Super-Resolution via Degradation-Aware Stable Diffusion [27.52552274944687]
We introduce a novel two-stage, degradation-aware framework that enhances the diffusion model's ability to recognize content and degradation in low-resolution images.
In the first stage, we employ unsupervised contrastive learning to obtain representations of image degradations.
In the second stage, we integrate a degradation-aware module into a simplified ControlNet, enabling flexible adaptation to various degradations.
arXiv Detail & Related papers (2024-03-31T12:07:04Z)
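As a rough illustration of the unsupervised contrastive first stage mentioned in the DeeDSR summary above (and of the batch-wise contrastive scheme the main abstract contrasts with), here is a generic InfoNCE-style sketch on degradation embeddings; the encoder, patch sizes, and temperature are placeholders, not DeeDSR's implementation.
```python
# Generic illustration (not DeeDSR's code) of contrastive degradation
# representation learning: two patches sharing the same degradation form a
# positive pair, patches from other images in the batch act as negatives.
import torch
import torch.nn.functional as F

def degradation_info_nce(q, k, temperature=0.1):
    """q, k: (B, D) embeddings of two patches per image; row i of q and k share
    one degradation, every other row serves as a negative (InfoNCE loss)."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature           # (B, B) cosine-similarity matrix
    labels = torch.arange(q.size(0))           # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with a stand-in encoder.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
patch_a = torch.rand(16, 3, 32, 32)            # patch 1 of each degraded image
patch_b = torch.rand(16, 3, 32, 32)            # patch 2, same degradation per row
loss = degradation_info_nce(encoder(patch_a), encoder(patch_b))
```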
- Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction [62.955327005837475]
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) using paired HR-LR training images.
We present an efficient test-time adaptation framework for SR, named SRTTA, which is able to quickly adapt SR models to test domains with different/unknown degradation types.
arXiv Detail & Related papers (2023-10-29T13:58:57Z)
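A hedged sketch of the general recipe the SRTTA summary points to: build a second-order degraded copy of the unlabeled test image and adapt the SR model with a self-supervised consistency loss. The particular degradation, consistency objective, and names below are illustrative assumptions, not the paper's exact method.
```python
# Hedged sketch of the general idea (not SRTTA's exact objective): create a
# second-order degraded copy of the test LR image and adapt the SR model with a
# self-supervised consistency loss between the two predictions at LR scale.
import torch
import torch.nn.functional as F

def second_order_degrade(lr, noise_sigma=0.05):
    """One cheap synthetic degradation: 3x3 box blur followed by Gaussian noise."""
    kernel = torch.full((3, 1, 3, 3), 1.0 / 9)
    blurred = F.conv2d(lr, kernel, padding=1, groups=3)
    return (blurred + noise_sigma * torch.randn_like(lr)).clamp(0, 1)

def adaptation_step(sr_model, lr, optimizer, scale=4):
    lr2 = second_order_degrade(lr)                       # second-order degraded copy
    sr_clean, sr_deg = sr_model(lr), sr_model(lr2)       # super-resolve both
    down = lambda x: F.interpolate(x, scale_factor=1 / scale, mode="bicubic")
    loss = F.l1_loss(down(sr_deg), down(sr_clean).detach())  # consistency at LR scale
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Toy usage with a stand-in SR model (bicubic x4 followed by one conv).
toy_sr = torch.nn.Sequential(torch.nn.Upsample(scale_factor=4, mode="bicubic"),
                             torch.nn.Conv2d(3, 3, 3, padding=1))
opt = torch.optim.Adam(toy_sr.parameters(), lr=1e-4)
adaptation_step(toy_sr, torch.rand(2, 3, 32, 32), opt)
```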
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive, and often infeasible, to include every type of real-world degradation in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
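The coarse, degradation-invariant prediction in DR2 comes from diffusing the degraded face to a noise level at which degradation artifacts are largely drowned out and then denoising with a pretrained face diffusion model. The snippet below only illustrates the standard DDPM forward (noising) step; the pretrained denoiser is stubbed, and the schedule and timestep are assumptions.
```python
# Illustrative only (not DR2's code): the degradation-invariant prediction idea
# can be pictured as diffusing the degraded face to a noise level at which most
# degradation artifacts are drowned out, then denoising with a pretrained face
# diffusion model (stubbed out here). Schedule and timestep are assumptions.
import torch

def diffuse(x0, t, alphas_cumprod):
    """Standard DDPM forward step: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * torch.randn_like(x0)

betas = torch.linspace(1e-4, 0.02, 1000)            # common linear beta schedule
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
degraded_face = torch.rand(1, 3, 256, 256)
x_t = diffuse(degraded_face, t=400, alphas_cumprod=alphas_cumprod)
# A pretrained face DDPM would now run the reverse process from x_t to obtain the
# coarse, degradation-invariant prediction that DR2's enhancement module restores.
```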
- Knowledge Distillation based Degradation Estimation for Blind Super-Resolution [146.0988597062618]
Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR) image from its corresponding low-resolution (LR) input image with unknown degradations.
It is infeasible to provide concrete labels of multiple degradation combinations to supervise the degradation estimator training.
We propose a knowledge distillation based implicit degradation estimator network (KD-IDE) and an efficient SR network.
arXiv Detail & Related papers (2022-11-30T11:59:07Z)
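As a rough sketch of the knowledge-distillation pattern the KD-IDE summary describes, one can imagine a teacher degradation estimator that also sees the HR image and a student that sees only the LR image and is trained to match the teacher's embedding, so no explicit degradation labels are needed. All names and network shapes below are placeholders, not the paper's architecture.
```python
# Rough placeholder sketch (not KD-IDE's architecture): a teacher estimator that
# also sees the HR image produces a degradation embedding, and a student that
# only sees the LR image is distilled to match it, avoiding explicit labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(in_ch, dim=64):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

teacher = make_encoder(in_ch=6)      # upsampled LR + HR, concatenated -> 6 channels
student = make_encoder(in_ch=3)      # LR only, all that is available at test time

hr, lr = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 32, 32)
lr_up = F.interpolate(lr, size=hr.shape[-2:], mode="bicubic")
with torch.no_grad():                                  # teacher assumed pre-trained
    t_embed = teacher(torch.cat([lr_up, hr], dim=1))
s_embed = student(lr)
distill_loss = F.l1_loss(s_embed, t_embed)             # align student with teacher
```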
- Unsupervised Degradation Representation Learning for Blind Super-Resolution [27.788488575616032]
CNN-based super-resolution (SR) methods suffer a severe performance drop when the real degradation is different from their assumption.
We propose an unsupervised degradation representation learning scheme for blind SR without explicit degradation estimation.
Our network achieves state-of-the-art performance for the blind SR task.
arXiv Detail & Related papers (2021-04-01T11:57:42Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
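The invertible-rescaling idea above can be illustrated with a single coupling step whose forward pass splits an HR image into an LR-like part and a latent z holding the lost detail, and whose inverse recovers the HR image exactly from (lr, z); during training, IRN pushes z toward a specified (e.g., Gaussian) distribution so that z can simply be sampled at upscaling time. The minimal layer below illustrates the coupling mechanism only, not the IRN architecture.
```python
# Minimal illustration of the invertible-rescaling mechanism (not the IRN
# architecture): one additive coupling step whose forward pass splits an HR
# image into an LR-like part and a latent z holding the lost detail, and whose
# inverse recovers the HR image from (lr, z).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertibleRescale(nn.Module):
    def __init__(self):
        super().__init__()
        # Inner nets of the coupling; the coupling structure itself, not these
        # convolutions, is what guarantees invertibility.
        self.f = nn.Conv2d(3, 9, 3, padding=1)
        self.g = nn.Conv2d(9, 3, 3, padding=1)

    def forward(self, hr):
        x = F.pixel_unshuffle(hr, 2)           # (B, 3, H, W) -> (B, 12, H/2, W/2)
        a, b = x[:, :3], x[:, 3:]              # LR branch / detail branch
        b = b + self.f(a)                      # additive coupling
        a = a + self.g(b)
        return a, b                            # a: LR-like image, b: latent z

    def inverse(self, lr, z):
        a = lr - self.g(z)
        b = z - self.f(a)
        return F.pixel_shuffle(torch.cat([a, b], dim=1), 2)

model = InvertibleRescale()
hr = torch.rand(1, 3, 64, 64)
lr, z = model(hr)                              # training would push z toward the
hr_rec = model.inverse(lr, z)                  # specified (e.g., Gaussian) prior
assert torch.allclose(hr, hr_rec, atol=1e-5)   # exact inverse when z is kept
```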
- Real-world Person Re-Identification via Degradation Invariance Learning [111.86722193694462]
Person re-identification (Re-ID) in real-world scenarios usually suffers from various degradation factors, e.g., low-resolution, weak illumination, blurring and adverse weather.
We propose a degradation invariance learning framework for real-world person Re-ID.
By introducing a self-supervised disentangled representation learning strategy, our method is able to extract robust, identity-related features.
arXiv Detail & Related papers (2020-04-10T07:58:50Z)