Learning Generalizable Latent Representations for Novel Degradations in Super Resolution
- URL: http://arxiv.org/abs/2207.12941v1
- Date: Mon, 25 Jul 2022 16:22:30 GMT
- Title: Learning Generalizable Latent Representations for Novel Degradations in Super Resolution
- Authors: Fengjun Li, Xin Feng, Fanglin Chen, Guangming Lu and Wenjie Pei
- Abstract summary: We propose to learn a latent representation space for degradations, which can be generalized from handcrafted (base) degradations to novel degradations.
The obtained representations for a novel degradation in this latent space are then leveraged to generate degraded images consistent with the novel degradation.
We conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness and advantages of our method for blind super-resolution with novel degradations.
- Score: 29.706191592443027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typical methods for blind image super-resolution (SR) focus on dealing with
unknown degradations by directly estimating them or learning the degradation
representations in a latent space. A potential limitation of these methods is
that they assume the unknown degradations can be simulated by the integration
of various handcrafted degradations (e.g., bicubic downsampling), which is not
necessarily true. Real-world degradations can lie beyond the simulation scope
of handcrafted degradations; we refer to such degradations as novel
degradations. In this work, we propose to learn a latent representation space
for degradations, which can be generalized from handcrafted (base) degradations
to novel degradations. The obtained representations for a novel degradation in
this latent space are then leveraged to generate degraded images consistent
with the novel degradation to compose paired training data for the SR model.
Furthermore, we perform variational inference to match the posterior of
degradations in latent representation space with a prior distribution (e.g.,
Gaussian distribution). Consequently, we are able to sample more high-quality
representations for a novel degradation to augment the training data for the
SR model. We conduct extensive experiments on both synthetic and real-world
datasets to validate the effectiveness and advantages of our method for blind
super-resolution with novel degradations.
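As a concrete illustration of the variational formulation sketched in the abstract, the following is a minimal PyTorch sketch, assuming a simple convolutional encoder that produces a Gaussian posterior over degradation codes and a degrader that re-synthesizes the LR image conditioned on a sampled code. All module names, layer sizes, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a variational degradation-representation model.
# Module names, sizes, and the loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DegradationEncoder(nn.Module):
    """Encodes an LR image into a Gaussian posterior over degradation codes."""

    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_mu = nn.Linear(128, z_dim)
        self.to_logvar = nn.Linear(128, z_dim)

    def forward(self, lr):
        h = self.features(lr).flatten(1)
        return self.to_mu(h), self.to_logvar(h)


class ConditionedDegrader(nn.Module):
    """Degrades an HR image conditioned on a sampled degradation code."""

    def __init__(self, z_dim: int = 64, scale: int = 4):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3 + z_dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, hr, z):
        # Broadcast the degradation code over the spatial grid of the HR image.
        z_map = z[:, :, None, None].expand(-1, -1, hr.size(2), hr.size(3))
        degraded = self.body(torch.cat([hr, z_map], dim=1))
        return F.interpolate(degraded, scale_factor=1.0 / self.scale, mode="bicubic")


def training_step(encoder, degrader, hr, lr, kl_weight=0.01):
    """One step: reconstruct the observed LR image and pull the posterior toward N(0, I)."""
    mu, logvar = encoder(lr)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
    lr_hat = degrader(hr, z)
    recon = F.l1_loss(lr_hat, lr)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
```

Once trained, additional codes can be drawn from the Gaussian prior (or near the posterior inferred for a novel degradation) and fed to the degrader to synthesize extra LR-HR pairs for SR training, mirroring the augmentation described above.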
Related papers
- Degradation Oriented and Regularized Network for Blind Depth Super-Resolution [48.744290794713905]
In real-world scenarios, captured depth data often suffer from unconventional and unknown degradation due to sensor limitations and complex imaging environments.
We propose the Degradation Oriented and Regularized Network (DORNet), a novel framework designed to adaptively address unknown degradation in real-world scenes.
Our approach begins with the development of a self-supervised degradation learning strategy, which models the degradation representations of low-resolution depth data.
To facilitate effective RGB-D fusion, we further introduce a degradation-oriented feature transformation module that selectively propagates RGB content into the depth data based on the learned degradation priors.
arXiv Detail & Related papers (2024-10-15T14:53:07Z)
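The degradation-oriented feature transformation in the DORNet entry above (selectively propagating RGB content into the depth branch based on learned degradation priors) can be pictured as a gated fusion step. The sketch below is only an assumption-based illustration in PyTorch; the module name, sigmoid gating, and channel sizes are not taken from the paper.

```python
# Hedged sketch: degradation-guided RGB-to-depth feature propagation via a
# learned per-channel gate. Structure and sizes are illustrative assumptions.
import torch
import torch.nn as nn


class DegradationGatedFusion(nn.Module):
    def __init__(self, channels: int = 64, deg_dim: int = 32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(deg_dim, channels), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, depth_feat, rgb_feat, deg_prior):
        # The degradation prior decides, per channel, how much RGB content
        # is allowed to flow into the depth branch.
        g = self.gate(deg_prior)[:, :, None, None]
        return self.fuse(torch.cat([depth_feat, g * rgb_feat], dim=1))
```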
- Content-decoupled Contrastive Learning-based Implicit Degradation Modeling for Blind Image Super-Resolution [33.16889233975723]
Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community.
We propose a new Content-decoupled Contrastive Learning-based blind image super-resolution (CdCL) framework.
arXiv Detail & Related papers (2024-08-10T04:51:43Z)
- Pairwise Distance Distillation for Unsupervised Real-World Image Super-Resolution [38.79439380482431]
Real-world super-resolution (RWSR) faces unknown degradations in the low-resolution inputs, all the while lacking paired training data.
Existing methods approach this problem by learning blind general models through complex synthetic augmentations on training inputs.
We introduce a novel pairwise distance distillation framework to address the unsupervised RWSR for a targeted real-world degradation.
arXiv Detail & Related papers (2024-07-10T01:46:40Z)
- Preserving Full Degradation Details for Blind Image Super-Resolution [40.152015542099704]
We propose an alternative to learn degradation representations through reproducing degraded low-resolution (LR) images.
By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations.
Experiments show that our representations can extract accurate and highly robust degradation information.
arXiv Detail & Related papers (2024-07-01T13:54:59Z)
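The reconstruction-driven idea in the entry above (encode whatever is needed to reproduce the observed degraded LR image) reduces to a simple training loop. The sketch below assumes hypothetical `encoder` and `degrader` networks and an L1 reconstruction loss; it is not the paper's implementation.

```python
# Hedged sketch: learn a degradation representation by forcing a conditioned
# degrader to reproduce the observed LR image. `encoder` and `degrader` are
# hypothetical modules; the L1 objective is an assumption.
import torch.nn.functional as F


def reproduce_lr_step(encoder, degrader, hr, lr, optimizer):
    rep = encoder(lr)              # degradation representation from the LR input
    lr_hat = degrader(hr, rep)     # re-degrade the paired HR image with that representation
    loss = F.l1_loss(lr_hat, lr)   # reconstruction pushes full degradation info into rep
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```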
- Incorporating Degradation Estimation in Light Field Spatial Super-Resolution [54.603510192725786]
We present LF-DEST, an effective blind Light Field SR method that incorporates explicit Degradation Estimation to handle various degradation types.
We conduct extensive experiments on benchmark datasets, demonstrating that LF-DEST achieves superior performance across a variety of degradation scenarios in light field SR.
arXiv Detail & Related papers (2024-05-11T13:14:43Z)
- Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction [62.955327005837475]
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) using paired HR-LR training images.
We present an efficient test-time adaptation framework for SR, named SRTTA, which is able to quickly adapt SR models to test domains with different/unknown degradation types.
arXiv Detail & Related papers (2023-10-29T13:58:57Z)
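A common way to realize the kind of test-time adaptation described in the SRTTA entry above is to further degrade the test input and adapt the model to undo that extra degradation. The loop below is a minimal sketch under that assumption; the degradation function, step count, and learning rate are placeholders, not the SRTTA recipe.

```python
# Hedged sketch of self-supervised test-time adaptation for SR: apply a
# second-order degradation to the test LR image and adapt the model to undo it.
# degrade_fn, step count, and learning rate are placeholder assumptions.
import torch
import torch.nn.functional as F


def adapt_on_test_image(sr_model, lr_test, degrade_fn, steps=10, lr=1e-4):
    optimizer = torch.optim.Adam(sr_model.parameters(), lr=lr)
    for _ in range(steps):
        lr_second = degrade_fn(lr_test)     # further-degraded copy of the test input
        recovered = sr_model(lr_second)     # model tries to undo the extra degradation
        # Match spatial size: the original test image serves as the self-supervised target.
        target = F.interpolate(lr_test, size=recovered.shape[-2:], mode="bicubic")
        loss = F.l1_loss(recovered, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return sr_model
```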
- Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
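The two-stage structure described in the DR2 entry above (first remove degradation to obtain a coarse, degradation-invariant prediction, then enhance it) can be expressed as a simple module composition. Both sub-modules below are placeholders; this is a sketch of the general pattern, not the DR2 architecture.

```python
# Hedged sketch of the remove-then-enhance pattern; `remover` and `enhancer`
# are placeholder sub-networks, not the components used in DR2.
import torch.nn as nn


class TwoStageRestorer(nn.Module):
    def __init__(self, remover: nn.Module, enhancer: nn.Module):
        super().__init__()
        self.remover = remover    # maps a degraded face to a coarse, degradation-invariant image
        self.enhancer = enhancer  # restores high-frequency detail on top of the coarse output

    def forward(self, degraded):
        coarse = self.remover(degraded)
        return self.enhancer(coarse)
```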
- Meta-Learning based Degradation Representation for Blind Super-Resolution [54.93926549648434]
We propose a Meta-Learning based Region Degradation Aware SR Network (MRDA)
We use the MRDA to rapidly adapt to the specific complex degradation after several iterations and extract implicit degradation information.
A teacher network MRDA$_T$ is designed to further utilize the degradation information extracted by MLN for SR.
arXiv Detail & Related papers (2022-07-28T09:03:00Z)
- Unsupervised Degradation Representation Learning for Blind Super-Resolution [27.788488575616032]
CNN-based super-resolution (SR) methods suffer a severe performance drop when the real degradation is different from their assumption.
We propose an unsupervised degradation representation learning scheme for blind SR without explicit degradation estimation.
Our network achieves state-of-the-art performance for the blind SR task.
arXiv Detail & Related papers (2021-04-01T11:57:42Z)
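Degradation representations of the unsupervised kind described in the entry above are often learned contrastively: two patches cropped from the same LR image share a degradation and form a positive pair, while patches from other images act as negatives. The sketch below illustrates that pattern with a standard InfoNCE loss; the encoder, temperature, and batch construction are assumptions, not the paper's exact scheme.

```python
# Hedged sketch of contrastive degradation-representation learning with an
# InfoNCE objective. `encoder` (mapping a patch to a D-dim vector), the
# temperature, and the batching are illustrative assumptions.
import torch
import torch.nn.functional as F


def degradation_contrastive_loss(encoder, patch_a, patch_b, temperature=0.07):
    # patch_a, patch_b: (B, 3, H, W) crops taken from the same B LR images.
    za = F.normalize(encoder(patch_a), dim=1)   # (B, D)
    zb = F.normalize(encoder(patch_b), dim=1)   # (B, D)
    logits = za @ zb.t() / temperature          # (B, B) cosine-similarity matrix
    labels = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, labels)      # matching index = positive pair
```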
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.