Learning to Zoom-in via Learning to Zoom-out: Real-world
Super-resolution by Generating and Adapting Degradation
- URL: http://arxiv.org/abs/2001.02381v1
- Date: Wed, 8 Jan 2020 05:17:02 GMT
- Title: Learning to Zoom-in via Learning to Zoom-out: Real-world
Super-resolution by Generating and Adapting Degradation
- Authors: Dong Gong, Wei Sun, Qinfeng Shi, Anton van den Hengel, Yanning Zhang
- Abstract summary: We propose a framework to learn SR from an arbitrary set of unpaired LR and HR images.
We minimize the discrepancy between the generated data and real data while learning a degradation-adaptive SR network.
The proposed unpaired method achieves state-of-the-art SR results on real-world images, even on datasets that favor paired-learning methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most learning-based super-resolution (SR) methods aim to recover a
high-resolution (HR) image from a given low-resolution (LR) image by learning
on LR-HR image pairs. SR methods trained on synthetic data do not perform well
in the real world, due to the domain gap between artificially synthesized and
real LR images. Efforts have thus been made to capture real-world image pairs.
However, the captured LR-HR image pairs usually suffer from unavoidable
misalignment, which hampers end-to-end learning.
Here, focusing on real-world SR, we ask a different question: since
misalignment is unavoidable, can we devise a method that needs no LR-HR image
pairing or alignment at all and uses real images as they are? We therefore
propose a framework that learns SR from an arbitrary set of unpaired LR and HR
images, and see how far we can go in such a realistic and "unsupervised"
setting. To do so, we first train a degradation generation network to
generate realistic LR images and, more importantly, to capture their
distribution (i.e., learning to zoom out). Instead of assuming the domain gap
has been eliminated, we minimize the discrepancy between the generated data and
real data while learning a degradation-adaptive SR network (i.e., learning to
zoom in). The proposed unpaired method achieves state-of-the-art SR results on
real-world images, even on datasets that favor paired-learning methods.
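The two-stage objective described above (learn to zoom out, then zoom in while reducing the generated-vs-real gap) can be sketched as a loss composition. This is a minimal illustration, not the paper's implementation: the abstract does not name a specific discrepancy measure, so a kernel maximum mean discrepancy (MMD) is used here as a hypothetical stand-in, and `sr_objective` is an illustrative helper.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=2.0):
    """Squared maximum mean discrepancy with an RBF kernel between two
    sample sets of shape (n, d). A hypothetical stand-in for the
    generated-vs-real LR discrepancy term (the paper's actual measure
    is not specified in the abstract)."""
    def kernel(a, b):
        diff = a[:, None, :] - b[None, :, :]
        return np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

def sr_objective(sr_loss, gen_lr_feats, real_lr_feats, lam=0.1):
    """Total loss: supervised SR loss on (generated LR, HR) pairs plus a
    weighted discrepancy between generated and real LR distributions."""
    return sr_loss + lam * rbf_mmd2(gen_lr_feats, real_lr_feats)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(64, 8))   # features of real LR images
close = rng.normal(0.0, 1.0, size=(64, 8))  # well-matched generated LR
far = rng.normal(3.0, 1.0, size=(64, 8))    # mismatched generator output
assert rbf_mmd2(real, close) < rbf_mmd2(real, far)
```

A generator whose outputs match the real LR distribution incurs a smaller discrepancy penalty, so the SR network is trained on data that stays close to the real domain.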
Related papers
- Unveiling Hidden Details: A RAW Data-Enhanced Paradigm for Real-World Super-Resolution [56.98910228239627]
Real-world image super-resolution (Real SR) aims to generate high-fidelity, detail-rich high-resolution (HR) images from low-resolution (LR) counterparts.
Existing Real SR methods primarily focus on generating details from the LR RGB domain, often leading to a lack of richness or fidelity in fine details.
We pioneer the use of details hidden in RAW data to complement existing RGB-only methods, yielding superior outputs.
arXiv Detail & Related papers (2024-11-16T13:29:50Z)
- Enhanced Super-Resolution Training via Mimicked Alignment for Real-World Scenes [51.92255321684027]
We propose a novel plug-and-play module designed to mitigate misalignment issues by aligning LR inputs with HR images during training.
Specifically, our approach involves mimicking a novel LR sample that aligns with HR while preserving the characteristics of the original LR samples.
We comprehensively evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness across a spectrum of SR models.
arXiv Detail & Related papers (2024-10-07T18:18:54Z)
- Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner.
Experimental results on real-world image SR datasets indicate that SDFlow can generate diverse realistic LR and SR images both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- Real-World Image Super-Resolution by Exclusionary Dual-Learning [98.36096041099906]
Real-world image super-resolution is a practical image restoration problem that aims to obtain high-quality images from in-the-wild input.
Deep learning-based methods have achieved promising restoration quality on real-world image super-resolution datasets.
We propose Real-World image Super-Resolution by Exclusionary Dual-Learning (RWSR-EDL) to address the feature diversity in perceptual- and L1-based cooperative learning.
arXiv Detail & Related papers (2022-06-06T13:28:15Z)
- Benefiting from Bicubically Down-Sampled Images for Learning Real-World Image Super-Resolution [22.339751911637077]
We propose to handle real-world SR by splitting this ill-posed problem into two comparatively more well-posed steps.
First, we train a network to transform real LR images to the space of bicubically downsampled images in a supervised manner.
Second, we take a generic SR network trained on bicubically downsampled images to super-resolve the transformed LR image.
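The two-step decomposition above can be sketched as a composed pipeline. This is a toy illustration with stand-ins for both learned networks: a 3x3 box filter plays the role of the real-LR-to-bicubic-domain network, and nearest-neighbour upsampling plays the role of the generic SR model trained on bicubic data.

```python
import numpy as np

def to_bicubic_domain(real_lr):
    """Step 1 (toy stand-in): map a real LR image toward the space of
    bicubically downsampled images. The paper trains a network for this;
    a 3x3 box filter serves here as an illustrative smoothing proxy."""
    padded = np.pad(real_lr, 1, mode="edge")
    out = np.zeros_like(real_lr, dtype=float)
    h, w = real_lr.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def generic_sr(clean_lr, scale=2):
    """Step 2 (toy stand-in): a generic SR model trained on bicubic data;
    nearest-neighbour upsampling keeps the sketch dependency-free."""
    return clean_lr.repeat(scale, axis=0).repeat(scale, axis=1)

def two_step_sr(real_lr, scale=2):
    """Compose the two better-posed steps into one real-world SR pipeline."""
    return generic_sr(to_bicubic_domain(real_lr), scale)

lr = np.random.default_rng(0).random((8, 8))
hr = two_step_sr(lr, scale=2)
assert hr.shape == (16, 16)
```

The design point is the split itself: each stage solves a better-posed sub-problem, and the second stage can reuse any off-the-shelf SR model trained on bicubic data.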
arXiv Detail & Related papers (2020-07-06T20:27:58Z)
- Unsupervised Real-world Image Super Resolution via Domain-distance Aware Training [33.568321507711396]
We propose a novel domain-distance aware super-resolution (DASR) approach for unsupervised real-world image SR.
The proposed method is validated on synthetic and real datasets and the experimental results show that DASR consistently outperforms state-of-the-art unsupervised SR approaches.
arXiv Detail & Related papers (2020-04-02T17:59:03Z)
- Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution [73.86924594746884]
Deep neural networks have exhibited promising performance in image super-resolution.
These networks learn a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images.
We propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions.
arXiv Detail & Related papers (2020-03-16T04:23:42Z)
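The dual regression constraint from the last entry can be sketched as follows. This is a toy, not the paper's networks: nearest-neighbour upsampling stands in for the primal (LR-to-HR) model and average pooling for the dual (HR-to-LR) mapping; the point is the closed-loop term that forces the downsampled SR output back onto the input LR, shrinking the space of admissible mappings.

```python
import numpy as np

def primal(lr, scale=2):
    """Primal mapping LR -> HR (toy: nearest-neighbour upsampling stands
    in for the learned SR network)."""
    return lr.repeat(scale, axis=0).repeat(scale, axis=1)

def dual(hr, scale=2):
    """Dual mapping HR -> LR (toy: average pooling stands in for the
    learned downsampling branch)."""
    h, w = hr.shape
    return hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def dual_regression_loss(lr, hr, lam=0.1):
    """Primal loss ||P(lr) - hr||^2 plus the closed-loop dual constraint
    ||D(P(lr)) - lr||^2 on the LR data."""
    sr = primal(lr)
    primal_loss = np.mean((sr - hr) ** 2)
    dual_loss = np.mean((dual(sr) - lr) ** 2)
    return primal_loss + lam * dual_loss

lr = np.random.default_rng(1).random((4, 4))
hr = primal(lr)
assert np.allclose(dual(hr), lr)        # the toy mappings form a closed loop
assert dual_regression_loss(lr, hr) < 1e-12
```

In the real method both mappings are learned jointly; the dual branch acts as a regularizer that any candidate SR function must satisfy.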
This list is automatically generated from the titles and abstracts of the papers in this site.