RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided
Self-Exemplars
- URL: http://arxiv.org/abs/2208.11313v1
- Date: Wed, 24 Aug 2022 05:48:17 GMT
- Title: RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided
Self-Exemplars
- Authors: Jun-Sang Yoo, Dong-Wook Kim, Yucheng Lu, and Seung-Won Jung
- Abstract summary: Single image super-resolution (SISR) methods have demonstrated outstanding performance in generating high-resolution (HR) images from low-resolution (LR) images.
We propose an integrated solution, called reference-based zero-shot SR (RZSR).
- Score: 20.771851470271777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent methods for single image super-resolution (SISR) have demonstrated
outstanding performance in generating high-resolution (HR) images from
low-resolution (LR) images. However, most of these methods show their
superiority using synthetically generated LR images, and their generalizability
to real-world images is often not satisfactory. In this paper, we pay attention
to two well-known strategies developed for robust super-resolution (SR), i.e.,
reference-based SR (RefSR) and zero-shot SR (ZSSR), and propose an integrated
solution, called reference-based zero-shot SR (RZSR). Following the principle
of ZSSR, we train an image-specific SR network at test time using training
samples extracted only from the input image itself. To advance ZSSR, we obtain
reference image patches with rich textures and high-frequency details which are
also extracted only from the input image using cross-scale matching. To this
end, we construct an internal reference dataset and retrieve reference image
patches from the dataset using depth information. Using LR patches and their
corresponding HR reference patches, we train a RefSR network that incorporates
a non-local attention module. Experimental results demonstrate the superiority
of the proposed RZSR over previous ZSSR methods, as well as its robustness to
unseen images compared with other fully supervised SISR methods.
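The cross-scale retrieval of depth-guided self-exemplars described in the abstract can be sketched in a few lines of numpy. The normalized cross-correlation matching and the depth-preference heuristic below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a window over a 2-D image; return flattened patches and coords."""
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.asarray(patches), coords

def retrieve_reference(query, ref_patches, query_depth, ref_depths, lam=0.1):
    """Return the index of the internal reference patch that best matches the
    query, mildly penalizing references imaged farther away than the query
    region (nearer patches tend to carry finer high-frequency detail)."""
    q = (query - query.mean()) / (query.std() + 1e-8)
    r = (ref_patches - ref_patches.mean(axis=1, keepdims=True)) / (
        ref_patches.std(axis=1, keepdims=True) + 1e-8)
    similarity = r @ q / q.size          # normalized cross-correlation in [-1, 1]
    penalty = lam * np.maximum(ref_depths - query_depth, 0.0)
    return int(np.argmax(similarity - penalty))
```

Following the ZSSR principle, an image-specific SR network would then be trained at test time on (LR patch, retrieved HR reference) pairs extracted only from the input image itself.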
Related papers
- Bridging the Domain Gap: A Simple Domain Matching Method for
Reference-based Image Super-Resolution in Remote Sensing [8.36527949191506]
Recently, reference-based image super-resolution (RefSR) has shown excellent performance in image super-resolution (SR) tasks.
We introduce a Domain Matching (DM) module that can be seamlessly integrated with existing RefSR models.
Our analysis reveals that their domain gaps often occur in different satellites, and our model effectively addresses these challenges.
arXiv Detail & Related papers (2024-01-29T08:10:00Z)
- Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model, dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner.
Experimental results on real-world image SR datasets indicate that SDFlow generates diverse and realistic LR and SR images, as verified both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- ICF-SRSR: Invertible scale-Conditional Function for Self-Supervised Real-world Single Image Super-Resolution [60.90817228730133]
Single image super-resolution (SISR) is a challenging problem that aims to up-sample a given low-resolution (LR) image to a high-resolution (HR) counterpart.
Recent approaches are trained on simulated LR images degraded by simplified down-sampling operators.
We propose a novel Invertible scale-Conditional Function (ICF) which can scale an input image and then restore the original input with different scale conditions.
arXiv Detail & Related papers (2023-07-24T12:42:45Z)
- SRTGAN: Triplet Loss based Generative Adversarial Network for Real-World Super-Resolution [13.897062992922029]
An alternative, software-driven solution called Single Image Super-Resolution (SISR) aims to reconstruct a High-Resolution (HR) image from a given Low-Resolution (LR) image.
We introduce a new triplet-based adversarial loss function that exploits the information provided in the LR image by using it as a negative sample.
We propose to fuse the adversarial loss, content loss, perceptual loss, and quality loss to obtain a Super-Resolution (SR) image with high perceptual fidelity.
arXiv Detail & Related papers (2022-11-22T11:17:07Z)
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
- Real Image Super-Resolution using GAN through modeling of LR and HR process [20.537597542144916]
We propose learnable adaptive sinusoidal nonlinearities, incorporated into the LR and SR models, which directly learn degradation distributions.
We demonstrate the effectiveness of our proposed approach in quantitative and qualitative experiments.
arXiv Detail & Related papers (2022-10-19T09:23:37Z)
- Reference-based Image Super-Resolution with Deformable Attention Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z)
- MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, where two novel modules are designed to address these problems.
The proposed Match & Extraction Module significantly reduces the computational cost through a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference of distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way.
arXiv Detail & Related papers (2021-06-04T07:15:32Z)
- DDet: Dual-path Dynamic Enhancement Network for Real-World Image Super-Resolution [69.2432352477966]
Real image super-resolution (Real-SR) focuses on the relationship between real-world high-resolution (HR) and low-resolution (LR) images.
In this article, we propose a Dual-path Dynamic Enhancement Network (DDet) for Real-SR.
Unlike conventional methods, which stack up massive convolutional blocks for feature representation, we introduce a content-aware framework to handle non-inherently aligned image pairs.
arXiv Detail & Related papers (2020-02-25T18:24:51Z)
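The spatial adaptation idea in the MASA-SR entry above, remapping the distribution of Ref features to that of LR features in a spatially adaptive way, lends itself to a compact sketch. The windowed mean/std matching below is an assumed, AdaIN-style illustration, not the module proposed in that paper:

```python
import numpy as np

def spatially_adaptive_remap(ref_feat, lr_feat, window=8):
    """Match the mean/std of reference features to those of LR features,
    one spatial window at a time, so the remap adapts across the image."""
    assert ref_feat.shape == lr_feat.shape
    out = np.empty_like(ref_feat)
    h, w = ref_feat.shape
    for y in range(0, h, window):
        for x in range(0, w, window):
            r = ref_feat[y:y + window, x:x + window]
            l = lr_feat[y:y + window, x:x + window]
            # Whiten the Ref window, then re-color it with LR statistics.
            out[y:y + window, x:x + window] = (
                (r - r.mean()) / (r.std() + 1e-8) * l.std() + l.mean())
    return out
```

A per-window remap like this narrows the distribution gap between Ref and LR content before the two feature streams are fused.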
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.