DDet: Dual-path Dynamic Enhancement Network for Real-World Image
Super-Resolution
- URL: http://arxiv.org/abs/2002.11079v1
- Date: Tue, 25 Feb 2020 18:24:51 GMT
- Title: DDet: Dual-path Dynamic Enhancement Network for Real-World Image
Super-Resolution
- Authors: Yukai Shi, Haoyu Zhong, Zhijing Yang, Xiaojun Yang, Liang Lin
- Abstract summary: Real image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network (DDet) for Real-SR.
Unlike conventional methods, which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to handle non-inherently aligned image pairs.
- Score: 69.2432352477966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Different from the traditional image super-resolution task, real image
super-resolution (Real-SR) focuses on the relationship between real-world
high-resolution (HR) and low-resolution (LR) images. Most traditional image SR
methods obtain the LR sample by applying a fixed down-sampling operator,
whereas Real-SR obtains the LR and HR image pair by capturing the same scene
with optical sensors of different quality. Generally, Real-SR poses more
challenges as well as broader application scenarios. Previous image SR methods
fail to exhibit similar performance on Real-SR because the image data is not
inherently aligned. In this article, we propose a Dual-path Dynamic Enhancement
Network (DDet) for Real-SR, which addresses the cross-camera image mapping by
realizing a dual-way dynamic sub-pixel weighted aggregation and refinement.
Unlike conventional methods, which stack up massive convolutional blocks for
feature representation, we introduce a content-aware framework to handle
non-inherently aligned image pairs in the image SR problem. First, we use a
content-adaptive component, the Multi-scale Dynamic Attention (MDA). Second, we
incorporate a long-term skip connection with a Coupled Detail Manipulation
(CDM) to perform collaborative compensation and manipulation. The two paths are
joined into a unified model and work collaboratively. Extensive experiments on
challenging benchmarks demonstrate the superiority of our model.
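The abstract only names the two paths, so as a rough illustration, below is a minimal PyTorch sketch of a dual-path layout: a content-adaptive attention path (standing in for MDA) and a long skip connection with a lightweight detail-manipulation path (standing in for CDM), fused into one sub-pixel upsampled output. All module internals here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a dual-path layout as described in the abstract.
# The MDA/CDM internals below are placeholders, NOT the authors' design.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    """Stand-in for Multi-scale Dynamic Attention (MDA): content-adaptive
    per-pixel weights computed from features at several dilation scales."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        attn = torch.sigmoid(self.fuse(multi))   # content-adaptive weights
        return x * attn

class DualPathSR(nn.Module):
    """Two paths: an attention path for weighted aggregation, and a long
    skip connection with a small detail path; outputs are summed."""
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.attn_path = nn.Sequential(*[MultiScaleAttention(ch) for _ in range(4)])
        self.detail_path = nn.Sequential(             # stand-in for CDM
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        self.up = nn.Sequential(
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),                   # sub-pixel upsampling
        )

    def forward(self, lr):
        feat = self.head(lr)
        fused = self.attn_path(feat) + self.detail_path(feat) + feat  # long skip
        return self.up(fused)

sr = DualPathSR()(torch.randn(1, 3, 48, 48))  # -> (1, 3, 96, 96)
```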
Related papers
- Bridging the Domain Gap: A Simple Domain Matching Method for Reference-based Image Super-Resolution in Remote Sensing [8.36527949191506]
Recently, reference-based image super-resolution (RefSR) has shown excellent performance in image super-resolution (SR) tasks.
We introduce a Domain Matching (DM) module that can be seamlessly integrated with existing RefSR models.
Our analysis reveals that domain gaps often arise between images from different satellites, and our model effectively addresses these challenges.
arXiv Detail & Related papers (2024-01-29T08:10:00Z)
- Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model dubbed as SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images unsupervisedly.
Experimental results on real-world image SR datasets indicate that SDFlow can generate diverse realistic LR and SR images both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- Reference-based Image and Video Super-Resolution via C2-Matching [100.0808130445653]
We propose C2-Matching, which performs explicit, robust matching across transformations and resolutions.
C2-Matching significantly outperforms the state of the art on the standard CUFED5 benchmark.
We also extend C2-Matching to the reference-based video super-resolution task, where an image taken in a similar scene serves as the HR reference image.
arXiv Detail & Related papers (2022-12-19T16:15:02Z)
- Robust Reference-based Super-Resolution via C2-Matching [77.51610726936657]
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which produces explicit, robust matching across transformations and resolutions.
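Both C2-Matching entries hinge on replacing implicit texture borrowing with explicit correspondence matching between the LR input and the HR reference. A generic way to make matching explicit is to embed both feature maps and take the argmax of cosine similarity per input location; the sketch below shows that baseline, with all names and the similarity choice being assumptions rather than the actual C2-Matching pipeline.

```python
# Generic explicit correspondence matching between LR input features and
# HR reference features via cosine similarity (a baseline sketch, not the
# actual C2-Matching method).
import torch
import torch.nn.functional as F

def match_correspondences(feat_lr, feat_ref):
    """feat_lr: (C, H, W) input features; feat_ref: (C, H', W') reference
    features. Returns, for each input location, the flat index of the most
    similar reference location and its similarity score."""
    q = F.normalize(feat_lr.flatten(1), dim=0)     # (C, H*W)
    k = F.normalize(feat_ref.flatten(1), dim=0)    # (C, H'*W')
    sim = q.t() @ k                                # (H*W, H'*W') cosine sims
    score, idx = sim.max(dim=1)                    # best match per location
    return idx, score

feat_lr = torch.randn(64, 32, 32)
feat_ref = torch.randn(64, 48, 48)
idx, score = match_correspondences(feat_lr, feat_ref)
# idx[i] says which reference position to borrow HR texture from for input
# position i; C2-Matching additionally makes this matching robust to
# transformation and resolution gaps between the two images.
```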
arXiv Detail & Related papers (2021-06-03T16:40:36Z)
- Component Divide-and-Conquer for Real-World Image Super-Resolution [143.24770911629807]
We present a large-scale Diverse Real-world image Super-Resolution dataset, i.e., DRealSR, as well as a divide-and-conquer Super-Resolution network.
DRealSR establishes a new SR benchmark with diverse real-world degradation processes.
We propose a Component Divide-and-Conquer (CDC) model and a Gradient-Weighted (GW) loss for SR.
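The entry names a Gradient-Weighted (GW) loss without giving its formula; one common way to realize the idea is to up-weight the reconstruction error at pixels with strong ground-truth gradients, since edges are where SR errors are most visible. The sketch below implements that generic form; the exact weighting in the paper may differ.

```python
# A generic gradient-weighted L1 loss: pixels with strong ground-truth
# gradients (edges) get larger weights. One plausible reading of a
# "Gradient-Weighted loss", not necessarily the CDC paper's exact form.
import torch

def gradient_weighted_l1(sr, hr, alpha=4.0):
    """sr, hr: (N, C, H, W) tensors in [0, 1]."""
    # Finite-difference gradient magnitude of the ground truth.
    dx = hr[..., :, 1:] - hr[..., :, :-1]
    dy = hr[..., 1:, :] - hr[..., :-1, :]
    grad = torch.zeros_like(hr)
    grad[..., :, :-1] += dx.abs()
    grad[..., :-1, :] += dy.abs()
    weight = 1.0 + alpha * grad            # emphasize edge pixels
    return (weight * (sr - hr).abs()).mean()

loss = gradient_weighted_l1(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```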
arXiv Detail & Related papers (2020-08-05T04:26:26Z)
- Benefiting from Bicubically Down-Sampled Images for Learning Real-World Image Super-Resolution [22.339751911637077]
We propose to handle real-world SR by splitting this ill-posed problem into two comparatively more well-posed steps.
First, we train a network to transform real LR images to the space of bicubically downsampled images in a supervised manner.
Second, we take a generic SR network trained on bicubically downsampled images to super-resolve the transformed LR image.
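This two-step recipe maps cleanly onto a simple inference pipeline: a learned domain-translation network turns a real LR image into a "bicubic-like" LR image, and an off-the-shelf bicubic-trained SR network then upscales the result. The sketch below wires the two stages together; both network classes are placeholders standing in for whatever models the paper actually trains.

```python
# Two-step real-world SR inference, following the split described above.
# `DomainTranslator` and `BicubicSRNet` are placeholder modules, not the
# paper's trained networks.
import torch
import torch.nn as nn

class DomainTranslator(nn.Module):
    """Step 1: map a real LR image into the bicubically-downsampled domain
    (placeholder: a small residual conv net)."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict a residual correction

class BicubicSRNet(nn.Module):
    """Step 2: a generic SR net trained on bicubic LR/HR pairs
    (placeholder: conv + sub-pixel upsampling)."""
    def __init__(self, ch=32, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.net(x)

real_lr = torch.rand(1, 3, 48, 48)
bicubic_like_lr = DomainTranslator()(real_lr)   # step 1: domain translation
sr = BicubicSRNet()(bicubic_like_lr)            # step 2: generic SR
print(sr.shape)  # torch.Size([1, 3, 192, 192])
```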
arXiv Detail & Related papers (2020-07-06T20:27:58Z)
- Characteristic Regularisation for Super-Resolving Face Images [81.84939112201377]
Existing facial image super-resolution (SR) methods focus mostly on improving artificially down-sampled low-resolution (LR) imagery.
Previous unsupervised domain adaptation (UDA) methods address this issue by training a model using unpaired genuine LR and HR data.
This leaves the model overstretched with two tasks: making the visual characteristics consistent and enhancing the image resolution.
We formulate a method that joins the advantages of conventional SR and UDA models.
arXiv Detail & Related papers (2019-12-30T16:27:24Z)