SRTGAN: Triplet Loss based Generative Adversarial Network for Real-World
Super-Resolution
- URL: http://arxiv.org/abs/2211.12180v1
- Date: Tue, 22 Nov 2022 11:17:07 GMT
- Title: SRTGAN: Triplet Loss based Generative Adversarial Network for Real-World
Super-Resolution
- Authors: Dhruv Patel, Abhinav Jain, Simran Bawkar, Manav Khorasiya, Kalpesh
Prajapati, Kishor Upla, Kiran Raja, Raghavendra Ramachandra, and Christoph
Busch
- Abstract summary: An alternative solution called Single Image Super-Resolution (SISR) is a software-driven approach that aims to take a Low-Resolution (LR) image and obtain the HR image.
We introduce a new triplet-based adversarial loss function that exploits the information provided in the LR image by using it as a negative sample.
We propose to fuse the adversarial loss, content loss, perceptual loss, and quality loss to obtain a Super-Resolution (SR) image with high perceptual fidelity.
- Score: 13.897062992922029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many applications such as forensics, surveillance, satellite imaging, medical
imaging, etc., demand High-Resolution (HR) images. However, obtaining an HR
image is not always possible due to the limitations of optical sensors and
their costs. An alternative solution called Single Image Super-Resolution
(SISR) is a software-driven approach that aims to take a Low-Resolution (LR)
image and obtain the HR image. Most supervised SISR solutions use the
ground-truth HR image as the target and do not make use of the information
provided in the LR image, which could be valuable. In this work, we introduce a
Triplet Loss-based Generative Adversarial Network, hereafter referred to as
SRTGAN, for the image super-resolution problem under real-world degradation. We
introduce a new triplet-based adversarial loss function that exploits the
information provided in the LR image by using it as a negative sample. Giving
the patch-based discriminator access to both the HR and LR images allows it to
better differentiate between them, thereby strengthening the adversary.
Further, we propose to fuse the adversarial loss, content loss, perceptual
loss, and quality loss to obtain a Super-Resolution (SR) image with high
perceptual fidelity. We validate the superior performance of the proposed
method over other existing methods on the RealSR dataset in terms of
quantitative and qualitative metrics.
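
To make the loss design described above concrete, below is a minimal PyTorch sketch of a triplet-style adversarial term that uses the upsampled LR input as the negative sample, fused with content, perceptual, and quality terms. The toy patch discriminator, the L1 distance on its patch responses, the stand-in feature and quality networks, the margin, and the loss weights are all illustrative assumptions; the exact SRTGAN formulation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Toy patch-based discriminator returning a per-patch score map."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch * 2, 1, 3, padding=1),  # 1-channel patch score map
        )
    def forward(self, x):
        return self.net(x)

def triplet_adversarial_loss(d, sr, hr, lr_up, margin=1.0):
    """Generator-side triplet term: the discriminator's response on SR should be
    closer to its response on HR (positive) than on the upsampled LR (negative)
    by at least `margin`."""
    a, p, n = d(sr), d(hr.detach()), d(lr_up.detach())
    pos = F.l1_loss(a, p)  # distance to the positive (HR) response
    neg = F.l1_loss(a, n)  # distance to the negative (LR) response
    return F.relu(pos - neg + margin)

def total_generator_loss(d, feat_net, quality_net, sr, hr, lr_up,
                         w_content=1.0, w_percep=1.0, w_adv=5e-3, w_quality=1e-2):
    """Fuse content, perceptual, adversarial, and quality terms (weights are placeholders)."""
    l_content = F.l1_loss(sr, hr)                               # pixel-wise fidelity
    l_percep = F.l1_loss(feat_net(sr), feat_net(hr).detach())   # feature-space similarity
    l_adv = triplet_adversarial_loss(d, sr, hr, lr_up)
    l_quality = -quality_net(sr).mean()                         # maximize a stand-in quality score
    return (w_content * l_content + w_percep * l_percep +
            w_adv * l_adv + w_quality * l_quality)

if __name__ == "__main__":
    d = PatchDiscriminator()
    feat_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # stand-in for VGG features
    quality_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                                nn.Flatten(), nn.Linear(8, 1))           # stand-in quality predictor
    hr = torch.rand(2, 3, 64, 64)
    lr = torch.rand(2, 3, 16, 16)
    lr_up = F.interpolate(lr, size=hr.shape[-2:], mode="bicubic", align_corners=False)
    sr = torch.rand(2, 3, 64, 64, requires_grad=True)  # stands in for the generator output
    loss = total_generator_loss(d, feat_net, quality_net, sr, hr, lr_up)
    loss.backward()
    print(float(loss))

In a full training loop, the discriminator would be updated with its own objective (e.g., pushing HR responses up and SR/LR responses down), and the SR tensor would come from the generator rather than random noise as assumed in this sketch.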
Related papers
- Learning Many-to-Many Mapping for Unpaired Real-World Image
Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner.
Experimental results on real-world image SR datasets indicate, both quantitatively and qualitatively, that SDFlow can generate diverse and realistic LR and SR images.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- Memory-augmented Deep Unfolding Network for Guided Image Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image.
arXiv Detail & Related papers (2022-02-12T15:37:13Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
- Perception Consistency Ultrasound Image Super-resolution via Self-supervised CycleGAN [63.49373689654419]
We propose a new perception consistency ultrasound image super-resolution (SR) method based on self-supervision and a cycle generative adversarial network (CycleGAN).
We first generate the HR fathers and the LR sons of the test ultrasound LR image through image enhancement.
We then make full use of the LR-SR-LR and HR-LR-SR cycle losses and the adversarial characteristics of the discriminator to encourage the generator to produce more perceptually consistent SR results.
arXiv Detail & Related papers (2020-12-28T08:24:04Z)
- Super-Resolution of Real-World Faces [3.4376560669160394]
Real low-resolution (LR) face images contain degradations which are too varied and complex to be captured by known downsampling kernels.
In this paper, we propose a two-module super-resolution network in which the feature extractor module extracts robust features from the LR image.
We train a degradation GAN to convert bicubically downsampled clean images to real degraded images, and interpolate between the obtained degraded LR image and its clean LR counterpart.
arXiv Detail & Related papers (2020-11-04T17:25:54Z)
- Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution [20.537597542144916]
We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions.
We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN) by training with a generative adversarial network (GAN) framework for the LR to HR domain translation.
arXiv Detail & Related papers (2020-09-07T11:11:18Z)
- Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
- Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution [73.86924594746884]
Deep neural networks have exhibited promising performance in image super-resolution.
These networks learn a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images.
We propose a dual regression scheme by introducing an additional constraint on the LR data to reduce the space of possible functions; a sketch of this kind of downscaling-consistency constraint is given after this list.
arXiv Detail & Related papers (2020-03-16T04:23:42Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
- Unpaired Image Super-Resolution using Pseudo-Supervision [12.18340575383456]
We propose an unpaired image super-resolution (SR) method using a generative adversarial network.
Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network.
Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.
arXiv Detail & Related papers (2020-02-26T10:30:52Z)
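
Several of the entries above share a downscaling-consistency idea: the LR-SR-LR cycle loss, the dual regression constraint, and PULSE's latent-space search all penalize the mismatch between the super-resolved output mapped back to LR space and the observed LR input. Below is a minimal PyTorch sketch of that shared constraint, assuming a fixed bicubic downscaler as a stand-in for the learned dual or degradation networks those papers actually use.

import torch
import torch.nn.functional as F

def downscaling_consistency(sr, lr, scale=4):
    """Penalize || downscale(SR) - LR ||_1, with a fixed bicubic operator
    standing in for a learned dual/downscaling network."""
    sr_down = F.interpolate(sr, scale_factor=1.0 / scale, mode="bicubic",
                            align_corners=False)
    return F.l1_loss(sr_down, lr)

sr = torch.rand(1, 3, 128, 128, requires_grad=True)  # placeholder generator output
lr = torch.rand(1, 3, 32, 32)                        # observed LR input
loss = downscaling_consistency(sr, lr)
loss.backward()

In the listed methods, the downscaling operator is typically learned (a dual regression branch or a degradation network) or applied inside a cycle with additional adversarial terms, rather than the fixed bicubic kernel assumed here.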