Component Divide-and-Conquer for Real-World Image Super-Resolution
- URL: http://arxiv.org/abs/2008.01928v1
- Date: Wed, 5 Aug 2020 04:26:26 GMT
- Title: Component Divide-and-Conquer for Real-World Image Super-Resolution
- Authors: Pengxu Wei, Ziwei Xie, Hannan Lu, Zongyuan Zhan, Qixiang Ye, Wangmeng
Zuo, Liang Lin
- Abstract summary: We present a large-scale Diverse Real-world image Super-Resolution dataset, i.e., DRealSR, as well as a divide-and-conquer Super-Resolution network.
DRealSR establishes a new SR benchmark with diverse real-world degradation processes.
We propose a Component Divide-and-Conquer (CDC) model and a Gradient-Weighted (GW) loss for SR.
- Score: 143.24770911629807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a large-scale Diverse Real-world image
Super-Resolution dataset, i.e., DRealSR, as well as a divide-and-conquer
Super-Resolution (SR) network, exploring the utility of guiding an SR model with
low-level image components. DRealSR establishes a new SR benchmark with diverse
real-world degradation processes, mitigating the limitations of conventional
simulated image degradation. In general, the targets of SR vary across image
regions with different low-level image components, e.g., smoothness preserving
for flat regions, sharpening for edges, and detail enhancing for textures.
Learning an SR model with a conventional pixel-wise loss is usually dominated
by flat regions and edges, and fails to infer realistic details of
complex textures. We propose a Component Divide-and-Conquer (CDC) model and a
Gradient-Weighted (GW) loss for SR. Our CDC parses an image into three
components, employs three Component-Attentive Blocks (CABs) to learn attentive
masks and intermediate SR predictions with an intermediate supervision learning
strategy, and trains an SR model following a divide-and-conquer learning
principle. Our GW loss also provides a feasible way to balance the difficulties
of image components for SR. Extensive experiments validate the superior
performance of our CDC and the challenging aspects of our DRealSR dataset
related to diverse real-world scenarios. Our dataset and codes are publicly
available at
https://github.com/xiezw5/Component-Divide-and-Conquer-for-Real-World-Image-Super-Resolution
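The GW loss described above re-weights a pixel-wise reconstruction loss so that edge and texture regions are not drowned out by the many easy flat pixels. The following is a minimal PyTorch sketch of that idea, not the authors' released implementation: it assumes the per-pixel weight is derived from the Sobel gradient magnitude of the ground-truth HR image, and the helper names and the alpha weighting parameter are illustrative assumptions (see the repository above for the official code).

    import torch
    import torch.nn.functional as F

    def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
        # Approximate per-pixel gradient magnitude with Sobel filters.
        # img: (N, C, H, W); returns (N, 1, H, W).
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
        sobel_y = sobel_x.transpose(-1, -2)
        gray = img.mean(dim=1, keepdim=True)  # collapse RGB to a single channel
        gx = F.conv2d(gray, sobel_x, padding=1)
        gy = F.conv2d(gray, sobel_y, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    def gradient_weighted_l1(sr: torch.Tensor, hr: torch.Tensor,
                             alpha: float = 4.0) -> torch.Tensor:
        # L1 loss re-weighted by the normalized HR gradient magnitude, so that
        # edges and textures contribute more than flat regions. alpha is a
        # hypothetical knob controlling how strongly gradients boost the weight.
        grad = gradient_magnitude(hr)
        grad = grad / (grad.amax(dim=(2, 3), keepdim=True) + 1e-8)
        weight = 1.0 + alpha * grad
        return (weight * (sr - hr).abs()).mean()

    # Usage: sr is the network output, hr the ground-truth high-resolution image.
    sr = torch.rand(2, 3, 64, 64, requires_grad=True)
    hr = torch.rand(2, 3, 64, 64)
    loss = gradient_weighted_l1(sr, hr)
    loss.backward()
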
Related papers
- Learning Dual-Level Deformable Implicit Representation for Real-World Scale Arbitrary Super-Resolution [81.74583887661794]
We build a new real-world super-resolution benchmark with both integer and non-integer scaling factors.
We propose a Dual-level Deformable Implicit Representation (DDIR) to solve real-world scale arbitrary super-resolution.
Our trained model achieves state-of-the-art performance on the RealArbiSR and RealSR benchmarks for real-world scale arbitrary super-resolution.
arXiv Detail & Related papers (2024-03-16T13:44:42Z)
- Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner.
Experimental results on real-world image SR datasets indicate that SDFlow can generate diverse and realistic LR and SR images, as demonstrated both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- Towards Real-World Burst Image Super-Resolution: Benchmark and Method [93.73429028287038]
In this paper, we establish a large-scale real-world burst super-resolution dataset, i.e., RealBSR, to explore the faithful reconstruction of image details from multiple frames.
We also introduce a Federated Burst Affinity network (FBAnet) to investigate non-trivial pixel-wise displacement among images under real-world image degradation.
arXiv Detail & Related papers (2023-09-09T14:11:37Z)
- Bridging Component Learning with Degradation Modelling for Blind Image Super-Resolution [69.11604249813304]
We propose a components decomposition and co-optimization network (CDCN) for blind SR.
CDCN decomposes the input LR image into structure and detail components in feature space.
We present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process.
arXiv Detail & Related papers (2022-12-03T14:53:56Z)
- Learning Structral Coherence Via Generative Adversarial Network for Single Image Super-Resolution [13.803141755183827]
Recent generative adversarial network (GAN) based SISR methods have yielded overall realistic SR images.
We introduce a gradient branch into the generator to preserve structural information by restoring high-resolution gradient maps in the SR process.
In addition, we utilize a U-net based discriminator to consider both the whole image and the detailed per-pixel authenticity.
arXiv Detail & Related papers (2021-01-25T15:26:23Z)
- Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution [20.537597542144916]
We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions.
We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN) by training with a generative adversarial network (GAN) framework for the LR to HR domain translation.
arXiv Detail & Related papers (2020-09-07T11:11:18Z)
- DDet: Dual-path Dynamic Enhancement Network for Real-World Image Super-Resolution [69.2432352477966]
Real image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network(DDet) for Real-SR.
Unlike conventional methods, which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to study non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)