Reference-based Image Super-Resolution with Deformable Attention
Transformer
- URL: http://arxiv.org/abs/2207.11938v1
- Date: Mon, 25 Jul 2022 07:07:00 GMT
- Title: Reference-based Image Super-Resolution with Deformable Attention
Transformer
- Authors: Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan
Wang, Luc Van Gool
- Abstract summary: RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
- Score: 62.71769634254654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reference-based image super-resolution (RefSR) aims to exploit auxiliary
reference (Ref) images to super-resolve low-resolution (LR) images. Recently,
RefSR has attracted great attention as it provides an alternative way to
surpass single-image SR. However, the RefSR problem poses two critical
challenges: (i) it is difficult to match correspondences between LR and Ref
images when they differ significantly; (ii) it is challenging to transfer
relevant textures from Ref images to compensate for the missing details in LR
images. To address these issues, this paper proposes a deformable attention
Transformer, namely DATSR, with multiple scales, each of which consists of a
texture feature encoder (TFE) module, a reference-based deformable attention
(RDA) module, and a residual feature aggregation (RFA) module. Specifically,
TFE first extracts features insensitive to image transformations (e.g.,
brightness changes) from the LR and Ref images; RDA then exploits multiple
relevant textures to enrich the LR features with more information; and RFA
finally aggregates the LR features and relevant textures to produce a more
visually pleasing result. Extensive experiments demonstrate that DATSR achieves
state-of-the-art performance on benchmark datasets both quantitatively and
qualitatively.
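The abstract only names the per-scale modules, so the following is a minimal, hypothetical PyTorch sketch of how one such scale could be wired together: a shared texture feature encoder (TFE), a reference-based deformable attention step (RDA, approximated here by offset-predicted warping of Ref features), and residual feature aggregation (RFA). Module internals, channel sizes, and the offset-based sampling are illustrative assumptions, not the authors' implementation.

# Illustrative sketch of one scale of a TFE -> RDA -> RFA pipeline.
# All internals are assumptions made for demonstration purposes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureFeatureEncoder(nn.Module):
    """TFE: shared conv encoder applied to both LR and Ref images (assumed)."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class ReferenceDeformableAttention(nn.Module):
    """RDA (simplified): predict per-pixel offsets from the LR/Ref features and
    warp the Ref features toward the LR grid via grid_sample."""
    def __init__(self, ch=64):
        super().__init__()
        self.offset = nn.Conv2d(2 * ch, 2, 3, padding=1)  # per-pixel (dx, dy)

    def forward(self, lr_feat, ref_feat):
        b, _, h, w = lr_feat.shape
        offsets = self.offset(torch.cat([lr_feat, ref_feat], dim=1))
        # Base sampling grid in [-1, 1], shifted by the predicted offsets.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=lr_feat.device),
            torch.linspace(-1, 1, w, device=lr_feat.device),
            indexing="ij",
        )
        grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
        grid = grid + offsets.permute(0, 2, 3, 1)
        return F.grid_sample(ref_feat, grid, align_corners=True)


class ResidualFeatureAggregation(nn.Module):
    """RFA: fuse LR features with the transferred Ref textures residually."""
    def __init__(self, ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, lr_feat, tex_feat):
        return lr_feat + self.fuse(torch.cat([lr_feat, tex_feat], dim=1))


# Usage: one scale applied to same-resolution LR/Ref inputs.
tfe = TextureFeatureEncoder()
rda = ReferenceDeformableAttention()
rfa = ResidualFeatureAggregation()
lr_img, ref_img = torch.randn(1, 3, 40, 40), torch.randn(1, 3, 40, 40)
lr_feat, ref_feat = tfe(lr_img), tfe(ref_img)
out = rfa(lr_feat, rda(lr_feat, ref_feat))  # shape: (1, 64, 40, 40)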
Related papers
- Bridging the Domain Gap: A Simple Domain Matching Method for
Reference-based Image Super-Resolution in Remote Sensing [8.36527949191506]
Recently, reference-based image super-resolution (RefSR) has shown excellent performance in image super-resolution (SR) tasks.
We introduce a Domain Matching (DM) module that can be seamlessly integrated with existing RefSR models.
Our analysis reveals that domain gaps often occur between images from different satellites, and our model effectively addresses these challenges.
arXiv Detail & Related papers (2024-01-29T08:10:00Z) - A Feature Reuse Framework with Texture-adaptive Aggregation for
Reference-based Super-Resolution [29.57364804554312]
Reference-based super-resolution (RefSR) has gained considerable success in the field of super-resolution.
We propose a feature reuse framework that guides the step-by-step texture reconstruction process.
We introduce a single image feature embedding module and a texture-adaptive aggregation module.
arXiv Detail & Related papers (2023-06-02T12:49:22Z) - Reference-based Image and Video Super-Resolution via C2-Matching [100.0808130445653]
We propose C2-Matching, which performs explicit, robust matching across transformations and resolutions.
C2-Matching significantly outperforms the state of the art on the standard CUFED5 benchmark.
We also extend C2-Matching to the reference-based video super-resolution task, where an image taken in a similar scene serves as the HR reference image.
arXiv Detail & Related papers (2022-12-19T16:15:02Z) - RRSR:Reciprocal Reference-based Image Super-Resolution with Progressive
Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z) - RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided
Self-Exemplars [20.771851470271777]
Single image super-resolution (SISR) has demonstrated outstanding performance in generating high-resolution (HR) images from low-resolution (LR) images.
We propose an integrated solution, called reference-based zero-shot SR (RZSR).
arXiv Detail & Related papers (2022-08-24T05:48:17Z) - MASA-SR: Matching Acceleration and Spatial Adaptation for
Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, where two novel modules are designed to address these problems.
The proposed Match & Extraction Module significantly reduces the computational cost by a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference of distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way.
arXiv Detail & Related papers (2021-06-04T07:15:32Z) - Robust Reference-based Super-Resolution via C2-Matching [77.51610726936657]
Reference-based Super-Resolution (Ref-SR) has recently emerged as a promising paradigm to enhance a low-resolution (LR) input image by introducing an additional high-resolution (HR) reference image.
Existing Ref-SR methods mostly rely on implicit correspondence matching to borrow HR textures from reference images to compensate for the information loss in input images.
We propose C2-Matching, which performs explicit, robust matching across transformations and resolutions.
arXiv Detail & Related papers (2021-06-03T16:40:36Z) - Learning Texture Transformer Network for Image Super-Resolution [47.86443447491344]
We propose a Texture Transformer Network for Image Super-Resolution (TTSR).
TTSR consists of four closely-related modules optimized for image generation tasks.
TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-06-07T12:55:34Z)