Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution
- URL: http://arxiv.org/abs/2005.00953v1
- Date: Sun, 3 May 2020 00:12:38 GMT
- Title: Deep Generative Adversarial Residual Convolutional Networks for
Real-World Super-Resolution
- Authors: Rao Muhammad Umer, Gian Luca Foresti, Christian Micheloni
- Abstract summary: We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN)
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits residual learning by minimizing an energy-based objective function with powerful image regularization and convex optimization techniques.
- Score: 31.934084942626257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most current deep learning based single image super-resolution (SISR) methods
focus on designing deeper / wider models to learn the non-linear mapping
between low-resolution (LR) inputs and the high-resolution (HR) outputs from a
large number of paired (LR/HR) training data. They usually assume that the LR
image is a bicubically down-sampled version of the HR image. However, such a
degradation process rarely holds in real-world settings, where images suffer
from inherent sensor noise, stochastic noise, compression artifacts, and a
possible mismatch between the image degradation process and the camera device.
These real-world corruptions significantly reduce the performance of current
SISR methods. To
address these problems, we propose a deep Super-Resolution Residual
Convolutional Generative Adversarial Network (SRResCGAN) to follow the
real-world degradation settings by adversarially training the model with
pixel-wise supervision in the HR domain from its generated LR counterpart. The
proposed network exploits residual learning by minimizing an energy-based
objective function with powerful image regularization and convex optimization
techniques. We demonstrate in quantitative and qualitative experiments that our
approach generalizes robustly to real inputs and is easy to deploy for other
down-scaling operators and on mobile/embedded devices.
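The training objective combines pixel-wise HR supervision, an adversarial term, and image regularization. Below is a minimal sketch of such an objective, assuming PyTorch; `generator`, `discriminator`, the TV regularizer, and the hyper-parameters are placeholders, not the authors' released SRResCGAN code.
```python
import torch
import torch.nn.functional as F

def tv_regularizer(x):
    # Total-variation term standing in for the paper's image regularization.
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

def generator_step(generator, discriminator, lr, hr, g_opt,
                   lambda_adv=5e-3, lambda_tv=1e-4):
    sr = generator(lr)                         # LR input -> HR estimate
    pixel_loss = F.l1_loss(sr, hr)             # pixel-wise supervision in the HR domain
    adv_loss = F.softplus(-discriminator(sr)).mean()  # non-saturating GAN loss on the SR output
    loss = pixel_loss + lambda_adv * adv_loss + lambda_tv * tv_regularizer(sr)
    g_opt.zero_grad()
    loss.backward()
    g_opt.step()
    return loss.item()
```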
Related papers
- Enhanced Super-Resolution Training via Mimicked Alignment for Real-World Scenes [51.92255321684027]
We propose a novel plug-and-play module designed to mitigate misalignment issues by aligning LR inputs with HR images during training.
Specifically, our approach involves mimicking a novel LR sample that aligns with HR while preserving the characteristics of the original LR samples.
We comprehensively evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness across a spectrum of SR models.
arXiv Detail & Related papers (2024-10-07T18:18:54Z)
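A purely illustrative sketch of the mimicked-alignment idea above, assuming PyTorch: build an LR sample that is spatially aligned with the HR image (here, simple bicubic downsampling) while borrowing the channel-wise statistics of the real LR input. `mimic_aligned_lr` and its parameters are hypothetical; this is not the paper's actual plug-and-play module.
```python
import torch
import torch.nn.functional as F

def mimic_aligned_lr(hr, lr_real, scale=4, eps=1e-6):
    # HR-aligned LR content via bicubic downsampling.
    lr_aligned = F.interpolate(hr, scale_factor=1 / scale,
                               mode="bicubic", align_corners=False)
    # Match per-channel mean/std of the real LR so its characteristics are preserved.
    mu_a, std_a = lr_aligned.mean((2, 3), keepdim=True), lr_aligned.std((2, 3), keepdim=True)
    mu_r, std_r = lr_real.mean((2, 3), keepdim=True), lr_real.std((2, 3), keepdim=True)
    return (lr_aligned - mu_a) / (std_a + eps) * std_r + mu_r
```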
- Learning Many-to-Many Mapping for Unpaired Real-World Image Super-resolution and Downscaling [60.80788144261183]
We propose an image downscaling and SR model dubbed SDFlow, which simultaneously learns a bidirectional many-to-many mapping between real-world LR and HR images in an unsupervised manner.
Experimental results on real-world image SR datasets indicate that SDFlow can generate diverse, realistic LR and SR images, both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-10-08T01:48:34Z)
- DCS-RISR: Dynamic Channel Splitting for Efficient Real-world Image Super-Resolution [15.694407977871341]
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations.
Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels.
We propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR.
arXiv Detail & Related papers (2022-12-15T04:34:57Z)
- Real Image Super-Resolution using GAN through modeling of LR and HR process [20.537597542144916]
We propose learnable adaptive sinusoidal nonlinearities, incorporated into the LR and SR models, to directly learn the degradation distributions.
We demonstrate the effectiveness of our proposed approach in quantitative and qualitative experiments.
arXiv Detail & Related papers (2022-10-19T09:23:37Z)
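A hedged sketch of a learnable adaptive sinusoidal nonlinearity, assuming PyTorch; the `AdaptiveSine` parameterization (x + a·sin(b·x) per channel) is an illustrative guess, not the paper's exact formulation.
```python
import torch
import torch.nn as nn

class AdaptiveSine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.amp = nn.Parameter(torch.zeros(1, channels, 1, 1))   # learnable amplitude
        self.freq = nn.Parameter(torch.ones(1, channels, 1, 1))   # learnable frequency

    def forward(self, x):
        # Identity plus a learnable sinusoidal perturbation.
        return x + self.amp * torch.sin(self.freq * x)
```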
- Memory-augmented Deep Unfolding Network for Guided Image Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of an HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image.
arXiv Detail & Related papers (2022-02-12T15:37:13Z)
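For context, the generic MAP/energy form such a guided-SR model minimizes is sketched below; the two priors phi and psi stand in for the paper's specific priors on the HR target image and are not its concrete formulation.
```latex
% y: LR target, g: HR guidance, D: degradation/downsampling operator,
% \phi, \psi: placeholder priors on the HR target (the paper's contribution).
\[
\hat{x} = \arg\max_{x}\, p(x \mid y, g)
        = \arg\min_{x}\, \tfrac{1}{2}\lVert y - D x \rVert_2^2
          + \lambda_1\,\phi(x) + \lambda_2\,\psi(x, g)
\]
```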
- Toward Real-world Image Super-resolution via Hardware-based Adaptive Degradation Models [3.9037347042028254]
Most single image super-resolution (SR) methods are developed on synthetic low-resolution (LR) and high-resolution (HR) image pairs.
We propose a novel supervised method to simulate an unknown degradation process with the inclusion of prior hardware knowledge.
Experiments on real-world datasets validate that our degradation model can estimate LR images more accurately than predetermined degradation operations.
arXiv Detail & Related papers (2021-10-20T19:53:48Z)
- Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
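A hedged sketch of relaxed, best-buddy-style supervision, assuming PyTorch: each predicted location is scored against slightly shifted ground-truth candidates and supervised by the closest one. The candidate construction here is illustrative, not Beby-GAN's exact loss.
```python
import torch

def relaxed_patch_loss(sr, hr, max_shift=1):
    # sr, hr: (N, C, H, W) tensors.
    losses = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            hr_shift = torch.roll(hr, shifts=(dy, dx), dims=(2, 3))
            # Per-pixel L1 error against each shifted candidate.
            losses.append((sr - hr_shift).abs().mean(dim=1, keepdim=True))
    # Per spatial location, keep the candidate with the smallest error.
    best = torch.stack(losses, dim=0).min(dim=0).values
    return best.mean()
```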
- Frequency Consistent Adaptation for Real World Super Resolution [64.91914552787668]
We propose a novel Frequency Consistent Adaptation (FCA) that ensures frequency-domain consistency when applying Super-Resolution (SR) methods to real scenes.
We estimate degradation kernels from unsupervised images and generate the corresponding Low-Resolution (LR) images.
Based on the domain-consistent LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR models.
arXiv Detail & Related papers (2020-12-18T08:25:39Z)
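A hedged sketch of a frequency-consistency term in the spirit of FCA, assuming PyTorch; the estimator below simply matches average Fourier magnitude spectra of generated and real LR images and is an illustration, not the paper's method.
```python
import torch

def frequency_consistency_loss(lr_generated, lr_real):
    # Average magnitude spectrum over batch and channels -> (H, W).
    mag_gen = torch.fft.fft2(lr_generated, norm="ortho").abs().mean(dim=(0, 1))
    mag_real = torch.fft.fft2(lr_real, norm="ortho").abs().mean(dim=(0, 1))
    # Penalize spectral mismatch between generated and real LR domains.
    return (mag_gen - mag_real).abs().mean()
```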
- Super-Resolution of Real-World Faces [3.4376560669160394]
Real low-resolution (LR) face images contain degradations which are too varied and complex to be captured by known downsampling kernels.
In this paper, we propose a two-module super-resolution network in which a feature extractor module extracts robust features from the LR image.
We train a degradation GAN to convert bicubically downsampled clean images to real degraded images, and interpolate between the obtained degraded LR image and its clean LR counterpart.
arXiv Detail & Related papers (2020-11-04T17:25:54Z)
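An illustrative sketch of the interpolation step described above, assuming PyTorch; `degradation_gan` and `alpha` are placeholders for the trained degradation generator and mixing weight.
```python
import torch
import torch.nn.functional as F

def make_training_lr(hr, degradation_gan, alpha=0.5, scale=4):
    # Clean LR counterpart via bicubic downsampling.
    lr_clean = F.interpolate(hr, scale_factor=1 / scale,
                             mode="bicubic", align_corners=False)
    # Clean LR -> realistically degraded LR via the degradation GAN.
    lr_degraded = degradation_gan(lr_clean)
    # Interpolate between the degraded LR image and its clean LR counterpart.
    return alpha * lr_degraded + (1 - alpha) * lr_clean
```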
- Deep Cyclic Generative Adversarial Residual Convolutional Networks for Real Image Super-Resolution [20.537597542144916]
We consider a deep cyclic network structure to maintain the domain consistency between the LR and HR data distributions.
We propose the Super-Resolution Residual Cyclic Generative Adversarial Network (SRResCycGAN) by training with a generative adversarial network (GAN) framework for the LR to HR domain translation.
arXiv Detail & Related papers (2020-09-07T11:11:18Z)
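A minimal sketch of the cyclic consistency idea, assuming PyTorch; `g_sr` and `g_lr` are placeholder LR-to-HR and HR-to-LR generators, not the released SRResCycGAN networks.
```python
import torch
import torch.nn.functional as F

def cycle_losses(g_sr, g_lr, lr, hr):
    sr = g_sr(lr)                      # LR -> HR estimate
    lr_rec = g_lr(sr)                  # HR estimate -> back to LR
    hr_rec = g_sr(g_lr(hr))            # HR -> LR -> HR
    cycle_lr = F.l1_loss(lr_rec, lr)   # LR-domain cycle consistency
    cycle_hr = F.l1_loss(hr_rec, hr)   # HR-domain cycle consistency
    return cycle_lr + cycle_hr
```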
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
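A hedged sketch of latent-space exploration in the spirit of PULSE, assuming PyTorch and a pretrained `generator` exposing a hypothetical `latent_dim` attribute; the actual method additionally constrains the search to the generator's latent/noise manifold.
```python
import torch
import torch.nn.functional as F

def explore_latent(generator, lr, scale=32, steps=200, step_size=0.1):
    # Optimize a latent code so the downscaled generator output matches the LR input.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=step_size)
    for _ in range(steps):
        hr = generator(z)                                   # candidate HR image
        down = F.interpolate(hr, scale_factor=1 / scale,
                             mode="bicubic", align_corners=False)
        loss = F.mse_loss(down, lr)                         # downscaling consistency
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```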
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.