Robust Reference-based Super-Resolution via C2-Matching
- URL: http://arxiv.org/abs/2106.01863v1
- Date: Thu, 3 Jun 2021 16:40:36 GMT
- Title: Robust Reference-based Super-Resolution via C2-Matching
- Authors: Yuming Jiang, Kelvin C.K. Chan, Xintao Wang, Chen Change Loy, Ziwei
Liu
- Abstract summary: Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which produces explicit, robust matching across transformation and resolution.
- Score: 77.51610726936657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising
paradigm to enhance a low-resolution (LR) input image by introducing an
additional high-resolution (HR) reference image. Existing Ref-SR methods mostly
rely on implicit correspondence matching to borrow HR textures from reference
images to compensate for the information loss in input images. However,
performing local transfer is difficult because of two gaps between input and
reference images: the transformation gap (e.g. scale and rotation) and the
resolution gap (e.g. HR and LR). To tackle these challenges, we propose
C2-Matching in this work, which produces explicit, robust matching across
transformation and resolution. 1) For the transformation gap, we propose a
contrastive correspondence network, which learns transformation-robust
correspondences using augmented views of the input image. 2) For the resolution
gap, we adopt a teacher-student correlation distillation, which distills
knowledge from the easier HR-HR matching to guide the more ambiguous LR-HR
matching. 3) Finally, we design a dynamic aggregation module to address the
potential misalignment issue. In addition, to faithfully evaluate the
performance of Ref-SR under a realistic setting, we contribute the
Webly-Referenced SR (WR-SR) dataset, mimicking the practical usage scenario.
Extensive experiments demonstrate that our proposed C2-Matching significantly
outperforms the state of the art by over 1 dB on the standard CUFED5 benchmark.
Notably, it also shows strong generalizability on the WR-SR dataset, as well as
robustness to large scale and rotation transformations.
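As a rough illustration of the two training signals described above, the following is a minimal PyTorch-style sketch, not the authors' released implementation: an InfoNCE-style contrastive loss between features of an input image and an augmented view of it (addressing the transformation gap), and a correlation-distillation loss in which the teacher's HR-HR correlation supervises the student's LR-HR correlation (addressing the resolution gap). The function names, feature shapes, temperature, and the use of an L1 penalty for the distillation term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_correspondence_loss(feat, feat_aug, temperature=0.07):
    # feat, feat_aug: (N, C, H, W) features of an image and of an augmented
    # (e.g. scaled/rotated) view, warped back so that position (i, j) in
    # `feat` corresponds to position (i, j) in `feat_aug`.
    n, c, h, w = feat.shape
    a = F.normalize(feat.flatten(2), dim=1)        # (N, C, H*W)
    b = F.normalize(feat_aug.flatten(2), dim=1)    # (N, C, H*W)
    # Similarity of every position in `feat` to every position in `feat_aug`.
    logits = torch.bmm(a.transpose(1, 2), b) / temperature   # (N, H*W, H*W)
    # The positive for position k is the same position k in the other view.
    target = torch.arange(h * w, device=feat.device).repeat(n)
    return F.cross_entropy(logits.reshape(n * h * w, h * w), target)

def correlation_distillation_loss(lr_feat, ref_feat, hr_feat, ref_feat_teacher):
    # Student correlation: LR-input features vs. reference features.
    # Teacher correlation: HR-input features vs. reference features.
    # The teacher signal is detached so only the student branch is trained.
    def correlation(x, y):
        x = F.normalize(x.flatten(2), dim=1)
        y = F.normalize(y.flatten(2), dim=1)
        return torch.bmm(x.transpose(1, 2), y)     # (N, H*W, H*W)

    corr_student = correlation(lr_feat, ref_feat)
    with torch.no_grad():
        corr_teacher = correlation(hr_feat, ref_feat_teacher)
    return F.l1_loss(corr_student, corr_teacher)
```

Loss weights and the training schedule of the correspondence network relative to the restoration network are design choices of the original work and are not reproduced here.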
Related papers
- Reference-based Image and Video Super-Resolution via C2-Matching [100.0808130445653]
We propose C2-Matching, which performs explicit, robust matching across transformation and resolution.
C2-Matching significantly outperforms the state of the art on the standard CUFED5 benchmark.
We also extend C2-Matching to Reference-based Video Super-Resolution task, where an image taken in a similar scene serves as the HR reference image.
arXiv Detail & Related papers (2022-12-19T16:15:02Z) - Reference-based Image Super-Resolution with Deformable Attention
Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z) - Learning Resolution-Adaptive Representations for Cross-Resolution Person
Re-Identification [49.57112924976762]
The cross-resolution person re-identification problem aims to match low-resolution (LR) query identity images against high-resolution (HR) gallery images.
It is a challenging and practical problem, since query images often suffer from resolution degradation due to the varying capture conditions of real-world cameras.
This paper explores an alternative SR-free paradigm to directly compare HR and LR images via a dynamic metric, which is adaptive to the resolution of a query image.
arXiv Detail & Related papers (2022-07-09T03:49:51Z) - ResiDualGAN: Resize-Residual DualGAN for Cross-Domain Remote Sensing
Images Semantic Segmentation [15.177834801688979]
The performance of a semantic segmentation model for remote sensing (RS) images pretrained on an annotated dataset degrades greatly when tested on another, unannotated dataset because of the domain gap.
Adversarial generative methods, e.g., DualGAN, are utilized for unpaired image-to-image translation to minimize the pixel-level domain gap.
In this paper, ResiDualGAN is proposed for RS image translation, where a resizer module addresses the scale discrepancy between RS datasets.
arXiv Detail & Related papers (2022-01-27T13:56:54Z) - MASA-SR: Matching Acceleration and Spatial Adaptation for
Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, where two novel modules are designed to address these problems.
The proposed Match & Extraction Module significantly reduces the computational cost through a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference in distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way (a hedged sketch of this idea follows the related-papers list below).
arXiv Detail & Related papers (2021-06-04T07:15:32Z) - DDet: Dual-path Dynamic Enhancement Network for Real-World Image
Super-Resolution [69.2432352477966]
Real image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network(DDet) for Real-SR.
Unlike conventional methods which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to study non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)
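As noted in the MASA-SR entry above, its Spatial Adaptation Module is a learned component that remaps the distribution of Ref features to that of LR features. As a rough, non-learned illustration of that distribution-remapping idea, the sketch below performs an AdaIN-like re-normalisation with windowed local statistics; the function name, the window size, and the assumption that the two feature maps are already spatially aligned are mine, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def spatially_adaptive_remap(ref_feat, lr_feat, window=7, eps=1e-5):
    # ref_feat, lr_feat: (N, C, H, W) feature maps, assumed already aligned
    # (e.g. after correspondence matching and warping).
    pad = window // 2

    def local_stats(x):
        # Rough local mean/std over a sliding window (borders are approximate
        # because padded zeros are included in the average).
        mean = F.avg_pool2d(x, window, stride=1, padding=pad)
        var = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mean ** 2
        return mean, torch.sqrt(var.clamp(min=eps))

    ref_mean, ref_std = local_stats(ref_feat)
    lr_mean, lr_std = local_stats(lr_feat)
    # Whiten Ref features with their own local statistics, then re-colour
    # them with the LR features' local statistics, position by position.
    return (ref_feat - ref_mean) / ref_std * lr_std + lr_mean
```

MASA-SR learns this remapping with network layers rather than closed-form statistics; the closed-form version here is only meant to make the "remap the Ref distribution to the LR distribution" idea concrete.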